Thank you for the answer. We will improve our system based on what you said.
I'm indexing and searching documents using Solr 6.x.
It is quite efficient when there are fewer shards and fewer nodes in the cluster.
However, when the number of shards exceeds 30 and each shard is about
30 GB, search performance is significantly reduced.
Currently, the user cache in Solr is actively
Hello,
What is the difference between setting parameters via SolrQuery vs
ModifiableSolrParams? If there is a difference, is there a preferred
choice? I'm using Solr 4.6.1.
SolrQuery query = new SolrQuery();
query.setParam("wt", "json");
ModifiableSolrParams params = new ModifiableSolrParams();
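For what it's worth, SolrQuery extends ModifiableSolrParams, so both end up as the same parameter map on the wire; SolrQuery just adds typed convenience setters. A minimal SolrJ 4.x sketch, with illustrative parameter values:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ParamsComparison {
    public static void main(String[] args) {
        // Convenience setters plus raw parameters on one object.
        SolrQuery query = new SolrQuery();
        query.setQuery("*:*");          // sets the q parameter
        query.setRows(10);              // sets the rows parameter
        query.setParam("wt", "json");   // raw parameter, same as below

        // The same request expressed purely as name/value parameters.
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set(CommonParams.Q, "*:*");
        params.set("rows", 10);
        params.set("wt", "json");

        // Either object can be passed to SolrServer.query(SolrParams).
        System.out.println(query);
        System.out.println(params);
    }
}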
Hello,
I may have missed this, but how do you specify a default core when using
the new-style format for solr.xml? When I view the status of my Solr core setup
(http://localhost:8983/solr/admin/cores?action=STATUS) I see an
isDefaultCore specification,
but I'm not sure where it came from and where
Spoke too soon. Hacking rocks!
Finally landed on this heuristic, and it works:
resourceURL:"http://someotherserver.org/"
On Thu, Nov 7, 2013 at 9:52 AM, Jack Park wrote:
> Figuring out a google query to gain an answer seems difficult given
> the ambiguity;
>
> I have
Figuring out a google query to gain an answer seems difficult given
the ambiguity;
I have a field:
into which I store a URL
which, when displayed as a result of a query, looks like this in the
admin console:
"resourceURL": "http://someotherserver.org/";,
The query "resourceURL:*" will find a
everything work.
On Sun, Nov 3, 2013 at 12:04 PM, Jack Park wrote:
> I now have a single ZK running standalone on 2121. On the same CPU, I
> have three nodes.
>
> I used a curl to send over two documents, one each to two of the three
> nodes in the cloud. According to a web query, th
I now have a single ZK running standalone on 2121. On the same CPU, I
have three nodes.
I used a curl to send over two documents, one each to two of the three
nodes in the cloud. According to a web query, they are both there.
My solrconfig.xml file has a custom update response processor chain
de
-cloud mode.
Thanks
Jack
On Fri, Nov 1, 2013 at 11:12 AM, Shawn Heisey wrote:
> On 11/1/2013 12:07 PM, Jack Park wrote:
>>
>> The top error message at my test harness is this:
>>
>> No live SolrServers available to handle this request:
>> [http://127.0.1.1:8983/solr
,
because those servers actually exist, to the test harness, at
10.1.10.178, and if I access any one of them from the browser,
/solr/collection1 does not work, but /solr/#/collection1 does work.
On Fri, Nov 1, 2013 at 10:34 AM, Jack Park wrote:
> /clusterstate.json seems to clearly state that al
ection reset by peer would suggest something in my code, but my
code is a clone of code supplied in a Solr training course. Must be
good. Right?
I also have no clue what /127.0.0.1:39065 is -- that's not one of my nodes.
The quest continues.
On Fri, Nov 1, 2013 at 9:21 AM, Jack Park wrote
you using?
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 1 Nov 2013, at 04:19, Jack Park wrote:
>
>> After digging deeper (slow for a *nix newbie), I uncovered issues with
>> the Java installation. A step in the installation of Oracle Java has it
>> that you -ins
the simple one-box
3-node cloud test, and used the test code from the Lucidworks course
to send over and read some documents. That failed with this:
Unknown document router '{name=compositeId}'
Lots more research.
Closer...
On Thu, Oct 31, 2013 at 5:44 PM, Jack Park wrote:
> Latest zo
The latest ZooKeeper is installed on an Ubuntu server box.
Java is the latest 1.7 build.
whereis points to java just fine.
/etc/zookeeper is empty.
I boot ZooKeeper from /bin with sudo ./zkServer.sh start.
The console says "Started".
/etc/zookeeper now has a .pid file.
In another console, ./zkServer.sh status return
Neither
installation shows a log4j log file anywhere.
I have reason to believe I followed all the instructions on the
ZooKeeper Getting Started page accurately. Still, no real cigar...
Java on Windows is 1.6.0_31; on Ubuntu it is 1.7.0_40.
Thanks in advance for any hints.
On Thu, Oct 24, 2013 at
Background: all testing done on a Win7 platform. This is my first
migration from a single Solr server to a simple cloud. Everything is
configured exactly as specified in the wiki.
I created a simple 3-node client, all localhost with different server
URLs, and a lone external zookeeper. The online
Using a different server than the default gets 4.5.1.
On Thu, Oct 24, 2013 at 9:35 AM, Jack Park wrote:
> Download redirects to 4.5.0
> Is there a typo in the server path?
>
> On Thu, Oct 24, 2013 at 9:14 AM, Mark Miller wrote:
Download redirects to 4.5.0
Is there a typo in the server path?
On Thu, Oct 24, 2013 at 9:14 AM, Mark Miller wrote:
>
> October 2013, Apache Solr™ 4.5.1 available
>
> The Lucene PMC is pleased to announce the release of Apache Solr 4.5.1
>
> Solr
Issue resolved. Not a Solr issue; a really hard-to-discover missing
library in my installation.
On Thu, Oct 10, 2013 at 7:10 PM, Jack Park wrote:
> I have an "interceptor" which grabs SolrDocument instances in the
> update handler chain. It feeds those documents as a JSON st
I have an "interceptor" which grabs SolrDocument instances in the
update handler chain. It feeds those documents as a JSON string out to
an agent system.
That system has been running fine all the way up to Solr 4.3.1.
I have discovered that, as of 4.4 and now 4.5, the very same config
files, agent
e processor chains will be configured with the Run Update
> processor as the last processor of the chain. That's where the Lucene index
> update and optional commit would be done.
>
> -- Jack Krupansky
>
> -Original Message- From: Jack Park
> Sent: Wednes
If one allows for a soft commit (rather than a hard commit on each
request), when does the updateRequestProcessorChain fire? Does it fire
after the commit?
Many thanks
Jack
Thu, Jun 27, 2013 at 9:45 AM, Mark Bennett
wrote:
> Jack,
>
> Did you ever find a fix for this?
>
> I'm having similar issues (different parts of solrconfig) and my guess is
> it's a config issue somewhere, vs. a proper casting problem, some nested init
> issue.
>
As one of the early reviewers of the manuscript, I always had high
hopes for this work.
I now have the PDF from Lulu; I do not have time now to dive deeply, but
will comment that it seems, to me at least, well worth owning.
Jack
On Fri, Jun 21, 2013 at 11:41 AM, Jack Krupansky
wrote:
> Okay, it's
I presume you mean https://www.varnish-cache.org/
That's the first I'd heard of it.
Thanks
Jack
On Thu, Jun 20, 2013 at 10:48 PM, William Bell wrote:
> Who is using varnish in front of SOLR?
>
> Anyone have any configs that work with the cache control headers of SOLR?
>
> --
> Bill Bell
> billnb
In some sense, if all you want to do is send over a URL, e.g.
http://localhost:8993/, it's not out of the question to
use the plain Java URL classes as exemplified at
http://www.cafeaulait.org/course/week12/22.html
or
http://stackoverflow.com/questions/7500342/using-sockets-to-fetch-a-webpage-with-java
Bu
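A minimal sketch of that plain-JDK approach, assuming no extra libraries and using the URL mentioned above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FetchUrl {
    public static void main(String[] args) throws Exception {
        // Open the URL with the plain JDK and print whatever comes back.
        URL url = new URL("http://localhost:8993/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}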
Jack,
Why are multi-valued fields considered messy?
I think I am about to learn something..
Thanks
Another Jack
On Mon, May 13, 2013 at 5:29 AM, Jack Krupansky wrote:
> Try the simplest, cleanest design first (at least on paper), before you
> start resorting to either dynamic fields or multi-va
What I learned is that I needed to upgrade Ant, then needed to install
Ivy; the build.xml in the outer Subversion directory has an Ant target
to install Ivy, and one to run-maven-build. I ran that, then switched
to /solr and ran "ant dist", which finished in under 2 minutes.
On Sun, Apr 14, 2013 at
of diagrams. Lots of examples.
>
> -- Jack Krupansky
>
> -Original Message- From: Jack Park
> Sent: Wednesday, April 03, 2013 11:25 AM
>
> To: solr-user@lucene.apache.org
> Subject: Re: Flow Chart of Solr
>
> There are three books on Solr, two with that in the ti
There are three books on Solr, two with that in the title, and one,
Taming Text, each of which has been very valuable in understanding
Solr.
Jack
On Wed, Apr 3, 2013 at 5:25 AM, Jack Krupansky wrote:
> Sure, yes. But... it comes down to what level of detail you want and need
> for a specific ta
,
> Jens
>
>
> On 03/28/2013 08:15 AM, Upayavira wrote:
>>
>> Why don't you index all ancestor classes with the document, as a
>> multivalued field, then you could get it in one hit. Am I missing
>> something?
>>
>> Upayavira
>>
>> On
arallel, running query/queries on their local shards.
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext.com/
>
>
>
>
>
> On Wed, Mar 27, 2013 at 3:11 PM, Jack Park wrote:
>> Hi Otis,
>>
>> I fully expect to grow to SolrCloud -- m
t.com/
>
>
>
>
>
> On Wed, Mar 27, 2013 at 12:53 PM, Jack Park wrote:
>> This is a question about "isA?"
>>
>> We want to know if M isA B isA?(M,B)
>>
>> For some M, one might be able to look into M to see its type or which
>> clas
This is a question about "isA?"
We want to know if M isA B, i.e. isA?(M, B).
For some M, one might be able to look into M to see its type or which
class(es) it is a subclass of. We're talking taxonomic queries
now.
But, for some M, one might need to ripple up the "transitive closure",
looking at
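A sketch of Upayavira's suggestion in SolrJ terms: precompute the ancestor set (the transitive closure) at index time into a hypothetical multi-valued "isA" field, so isA?(M, B) becomes a single query. Field names and the URL are illustrative:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IsAExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Index M with its full ancestor set, computed outside Solr.
        SolrInputDocument m = new SolrInputDocument();
        m.addField("id", "M");
        m.addField("isA", "B");   // direct superclass
        m.addField("isA", "A");   // ancestor of B, precomputed at index time
        solr.add(m);
        solr.commit();

        // isA?(M, B): does document M list B anywhere in its ancestor set?
        SolrQuery q = new SolrQuery("id:M");
        q.addFilterQuery("isA:B");
        boolean isA = solr.query(q).getResults().getNumFound() > 0;
        System.out.println("M isA B: " + isA);
    }
}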
Is there a document that tells how to create multiple threads? Search
returns many hits which orbit this idea, but I haven't spotted one
which tells how.
Thanks
Jack
On Fri, Mar 15, 2013 at 1:01 PM, Mark Miller wrote:
> You def have to use multiple threads with it for it to be fast, but 3 or 4
harvest
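For what it's worth, a rough sketch of one way to index with several threads in SolrJ 4.x: a small fixed pool sharing one (thread-safe) client. The URL, field names, and counts are illustrative, not from this thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ParallelIndexer {
    public static void main(String[] args) throws Exception {
        final HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        ExecutorService pool = Executors.newFixedThreadPool(4);   // 3 or 4 threads, as suggested
        for (int t = 0; t < 4; t++) {
            final int thread = t;
            pool.submit(() -> {
                // Each thread sends its own slice of the documents.
                for (int i = 0; i < 1000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", "doc-" + thread + "-" + i);
                    solr.add(doc);
                }
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        solr.commit();
    }
}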
The problem returns. It simply appears that I cannot declare a named
requestHandler using that class.
Jack
On Tue, Mar 12, 2013 at 12:22 PM, Jack Park wrote:
> Indeed! Perhaps the germane part is this, before the failure to
> instantiate notice:
>
&g
truly supported
> implementation…
>
> - Mark
>
> On Mar 12, 2013, at 2:53 PM, Jack Park wrote:
>
>> That messages gives great, but terrible google. Zillions of hits,
>> mostly filled with very long log traces, and zero messages (that I
>> could find) about what
That message gives great but terrible Google results. Zillions of hits,
mostly filled with very long log traces, and zero messages (that I
could find) about what to do about it.
I switched over to using that handler since it has an update log
specified, and that's the only place I've found how to use up
s from.
In any case, I might have successfully settled on how to choose which
update chain, but now I am deep into the bowels of update logs.
What am I missing?
Many thanks
Jack
On Mon, Mar 11, 2013 at 9:45 PM, Jack Park wrote:
> Many thanks.
> Let me record here what I have trie
stHandler config. Search for
> /update, duplicate that, and change the chain it points to.
>
> Upayavira
>
> On Mon, Mar 11, 2013, at 05:22 AM, Jack Park wrote:
>> With 4.1, not in cloud configuration, I have a custom response handler
>> chain which injects an additional
With 4.1, not in cloud configuration, I have a custom response handler
chain which injects an additional handler for studying the documents
as they come in. But, when I do partial updates on those documents, I
don't want them to be studied again, so I created another version of
the same chain, but
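One way to pick the chain per request from SolrJ, rather than duplicating /update as suggested, is the update.chain parameter. A sketch, with a hypothetical chain name and URL:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class ChooseChain {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "node-1");

        // Name the chain that skips the extra "studying" processor.
        UpdateRequest req = new UpdateRequest();              // goes to /update
        req.setParam("update.chain", "partial-update-chain"); // hypothetical chain name
        req.add(doc);
        req.process(solr);
        solr.commit();
    }
}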
I found a tiny notice about just using quotes; tried it in the admin
query console and it works. e.g. label:"car house" would fetch any
document for which the label field contained that phrase.
Jack
On Fri, Mar 1, 2013 at 9:17 AM, Shawn Heisey wrote:
> On 3/1/2013 8:50 AM, vsl wrote:
>>
>> I wou
you.
>
> Michael Della Bitta
>
>
> Appinions
> 18 East 41st Street, 2nd Floor
> New York, NY 10017-6271
>
> www.appinions.com
>
> Where Influence Isn’t a Game
>
>
> On Sun, Feb 24, 2013 at 12:29 AM, Jack Park wrote:
>
848603",
"details": [
"here & there",
"Oh Fudge"
],
It appears that using the XMLResponseParser and getting the query
string right works!
Many thanks for all the comments.
Cheers
Jack
On Thu, Feb 21, 2013 at 5:45 P
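For reference, a sketch of switching SolrJ from its default javabin parser to the XML response parser; the URL and query are illustrative:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;

public class XmlParserExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        solr.setParser(new XMLResponseParser());   // use XML instead of javabin

        // SolrJ URL-encodes the query, so the & in the phrase needs no manual escaping.
        SolrQuery q = new SolrQuery("details:\"here & there\"");
        System.out.println(solr.query(q).getResults());
    }
}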
d
judicious escaping of reserved characters seems to be helping. Next up
are two issues: more robust testing of escaped characters, and
trying to discover the best approach to dealing with
characters that must be escaped to get past XML, e.g. '<', '>', and
Michael,
I don't think you misunderstood. I will soon give a full response here, but
am on the road at the moment.
Many thanks
Jack
On Friday, February 22, 2013, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> My mistake, I misunderstood the problem.
>
> Michael Della Bitta
>
>
I have a multi-valued stored field called "details"
I've been deliberately sending it values like
If I fetch a document with that field at the admin query console,
using XML, I get:
If I fetch with JSON, I get:
"details": [
""
],
Even more curious, if I
Hi Vinay,
Perhaps you could say more about what you are looking for? What use cases, say.
Did you see the book _Taming Text_?
Thanks
Jack
On Fri, Feb 22, 2013 at 8:48 AM, Vinay B, wrote:
> Hi,
>
> A few questions, some specific to UIMA, others more general.
> 1. The SOLR/UIMA example employs 3r
Marcelo
In some sense, it sounds like you are aiming at building a topic map
of all your resources.
Jack
On Thu, Feb 21, 2013 at 11:54 AM, Marcelo Elias Del Valle
wrote:
> Hello David,
>
> First of all, thanks for answering!
>
> 2013/2/21 David Quarterman
>
>> Looked through your site and
eb 21, 2013 at 8:52 AM, Timothy Potter wrote:
> Weird - the only difference I see is that we use XML vs. JSON, but
> otherwise, doing the following works for us:
>
> VALU1
> VALU2
>
> Result would be:
>
>
> VALU1
> VALU2
>
>
>
> On Thu, Feb 21, 2013
was a bug for this fixed for 4.1 - which version are you on? I
> remember this b/c I was on 4.0 and had to upgrade for this exact
> reason.
>
> https://issues.apache.org/jira/browse/SOLR-4134
>
> Tim
>
> On Wed, Feb 20, 2013 at 9:16 PM, Jack Park wrote:
>> From what I
From what I can read about partial updates, it will only work for
singleton fields, where you can set them to something else, or
multi-valued fields, where you can add something. I am testing on 4.1.
I ran some tests to prove to myself that you cannot do anything else to a
multi-valued field, like remov
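For reference, a SolrJ sketch of the two operations that do work: "set" on a single-valued field and "add" on a multi-valued one. Field names and the URL are illustrative:

import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdate {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        // "set" replaces the value of a single-valued field.
        doc.addField("title", Collections.singletonMap("set", "New title"));
        // "add" appends a value to a multi-valued field.
        doc.addField("details", Collections.singletonMap("add", "another value"));
        solr.add(doc);
        solr.commit();
    }
}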
Hi Fergus,
Would it make sense to you to switch to the Apache 2 license so that
your project can "play nice" in the apache ecosystem?
Thanks
Jack
On Sun, Feb 17, 2013 at 6:25 AM, Fergus McDowall
wrote:
> Erik
>
> Thanks for the great feedback. It fills me with joy to know that another
> human b
Say you have a dozen servers, one core each. Say you wish to add an
agent reference inside the solrconfig update response descriptor.
Would you do that for every core?
Thanks in advance.
Jack
; Solr & ElasticSearch Support
> http://sematext.com/
>
>
>
>
>
> On Mon, Jan 21, 2013 at 3:06 PM, Jack Park wrote:
>
>> Here is a situation I now experience:
>>
>> What Solr has:
>>
Here is a situation I now experience:
What Solr has:
economist and thus …@en
What was sent:
economist and thus …@en
where those are just snippets from what I sent up -- the ellipsis wa
someserver.org/something";
>
> -- Jack Krupansky
>
> -Original Message- From: Jack Park
> Sent: Monday, January 21, 2013 1:41 PM
> To: solr-user@lucene.apache.org
> Subject: When a URL is a component of a query string's data?
>
>
> There exists in my Solr index a docum
There exist in my Solr index documents (several, actually) which
harbor http:// URL values. Trying to find documents with a particular
URL fails.
The query is like this:
ResourceURLPropertyType:http://someserver.org/something
It fails due to the second ":".
If I substitute %3a into that query, e.g
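A sketch of two ways around that second ":": quote the whole value as a phrase, or escape the reserved characters with SolrJ's ClientUtils.escapeQueryChars. The field name and URL are the ones from the message:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.util.ClientUtils;

public class UrlQuery {
    public static void main(String[] args) {
        String url = "http://someserver.org/something";

        // Quote the URL so the parser treats it as a single phrase value.
        SolrQuery quoted = new SolrQuery("ResourceURLPropertyType:\"" + url + "\"");

        // Or escape every reserved character, including the colons.
        SolrQuery escaped = new SolrQuery(
                "ResourceURLPropertyType:" + ClientUtils.escapeQueryChars(url));

        System.out.println(quoted.getQuery());
        System.out.println(escaped.getQuery());
    }
}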
Similar thoughts: I used unit tests to explore that issue with SolrJ,
originally encoding with ClientUtils. The returned results had "|" in
many places in the text, with no clear way to un-encode them. I eventually
ran some tests with no encoding at all, including strings like
"hello & goodbye"; such strin
zulu string back to a Date object
as needed.
Seems to be working fine now.
Many thanks
Jack
On Sat, Jan 12, 2013 at 10:52 PM, Shawn Heisey wrote:
> On 1/12/2013 7:51 PM, Jack Park wrote:
>>
>> My work engages SolrJ, with which I send documents off to Solr 4 which
>> properl
My work engages SolrJ, with which I send documents off to Solr 4, which
stores them properly; as viewed in the admin panel, a date looks like this example:
2013-02-04T02:11:39.995Z
When I retrieve a document with that date, I use the SolrDocument
returned as a Map in which the date now looks like
this:
Sun Feb 03 18:11
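The value in the SolrDocument is a java.util.Date; its toString() just renders in the local time zone. A sketch of turning it back into the Zulu form Solr shows (the field name in the comment is illustrative):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ZuluDate {
    public static void main(String[] args) {
        // In practice: Date date = (Date) doc.getFieldValue("yourDateField");
        Date date = new Date();
        SimpleDateFormat zulu = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        zulu.setTimeZone(TimeZone.getTimeZone("UTC"));   // format in UTC, not local time
        System.out.println(zulu.format(date));           // e.g. 2013-02-04T02:11:39.995Z
    }
}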
Hi Chris,
Your suspicion turned out to be spot on with a code glitch.
The history of this has been due to a fairly weak understanding of how
partial update works. The first code error was just a simple, stupid
one in which I was not working against a "current" copy of the
document. But, when I got
I am running against a networked Solr 4 installation -- but not using
any of the cloud apparatus. I wish to update a document (Node) with
new information. I send it back as a partial update using SolrJ's add()
command, with:
document id
the new or updated field
version number precisely as it was fetched
What
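A sketch of that kind of partial update in SolrJ: send the id, the changed field as a "set" operation, and _version_ exactly as fetched, so Solr rejects the update (HTTP 409) if the document changed in between. Field names and the URL are illustrative:

import java.util.Collections;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class VersionedUpdate {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Fetch the current copy of the document and remember its version.
        SolrDocument current = solr.query(new SolrQuery("id:node-1")).getResults().get(0);
        Long version = (Long) current.getFieldValue("_version_");

        // Send back only the id, the changed field, and the version we fetched.
        SolrInputDocument update = new SolrInputDocument();
        update.addField("id", "node-1");
        update.addField("label", Collections.singletonMap("set", "new label"));
        update.addField("_version_", version);   // optimistic concurrency check
        solr.add(update);
        solr.commit();
    }
}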
I already have a handful of Solr instances running. However, I'm
trying to install Solr (1.4) on a new Linux server with Tomcat using a
context file (the same way I usually do):
However, it throws an exception due to the following:
SEVERE: Could not start SOLR. Check solr/home propert
al Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Monday, September 21, 2009 3:42 PM
To: solr-user@lucene.apache.org
Subject: Re: what is too large for an indexed field
Park, Michael wrote:
> I am trying to place the value of around 390,000 characters into a
> single fie
large for an indexed field
On Mon, Sep 21, 2009 at 3:27 PM, Park, Michael wrote:
> I am trying to place the value of around 390,000 characters into a
> single field. However, my search results have become inaccurate.
Do you mean that the document should score higher, or that the
do
Will I need to use Solr 1.3 with the EdgeNGramFilterFactory in order to
get the autosuggest feature?
-Original Message-
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
Sent: Monday, November 12, 2007 1:05 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr + autocomplete
: "Error loadi
Thanks Ryan,
This looks like the way to go. However, when I set up my schema I get,
"Error loading class 'solr.EdgeNGramFilterFactory'". For some reason
the class is not found. I tried the stable 1.2 build and even tried the
nightly build. I'm using "".
Any suggestions?
Thanks,
Mike
-Or
-Bharani
Park, Michael wrote:
>
> Thanks! That's a good suggestion too. I'll look into that.
>
> Actually, I was hoping someone had used a reliable JS library that
> accepted JSON.
>
> -Original Message-
> From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Thanks! That's a good suggestion too. I'll look into that.
Actually, I was hoping someone had used a reliable JS library that
accepted JSON.
-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Monday, October 15, 2007 4:44 PM
To: solr-user@lucene.apache.org
Subject:
Hi Chris,
No. I set up a separate file, same as the wiki.
It's either a Tomcat version issue or a difference between how Tomcat on
my Win laptop is configured vs. the configuration on our Tomcat Unix
machine.
I intend to run multiple instances of Solr in production and wanted to
use the cont
I've found the problem.
The Context attribute path needed to be set:
-Original Message-
From: Park, Michael [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 05, 2007 5:28 PM
To: solr-user@lucene.apache.org
Subject: tomcat context fragment
Hello All,
I've been wo
Hello All,
I've been working with Solr on Tomcat 5.5/Windows and had success
setting my Solr home using the context fragment. However, I cannot get
it to work on Tomcat 5.0.28/Unix. I've read and re-read the Apache
Tomcat documentation and cannot find a solution. Has anyone run into
this issu