in Ralph's talk.
I do not have a more concrete pointer, sorry, and I would love to read
something more concrete and closer to Solr about them.
Paul
On 12 Sept. 2012, at 00:46, Jack Krupansky wrote:
> My standard question for such a situation: How are you expecting your users
> to quer
Isn't XSLT the bottleneck here?
I have not yet met an incremental XSLT processor, although I have heard it
claimed that XSLT 1 could do it in principle.
If you start to do this kind of processing, I think you have no other choice
than to write your own output method.
Paul
On 12 Sept. 2012, at
Indexing is not happening after 'x' documents.
I am using Bitnami and upgraded the MySQL server from MySQL 5.1.* to MySQL
5.5.*. After the upgrade, when I ran indexing on Solr, nothing got
indexed.
I am using a procedure in which I am finding the parent of a child and
inserting it in a
ample.
When in this situation, I generally make
Raw data: $sentence of class $sentence.getClass()
(note: class is not a bean property, you need the method call)
Hope it helps.
Paul
PS: to stop this hell, I have a JSP counterpart to the VelocityResponseWriter, is
this something of interest for someo
Hi,
I am using MySQL as the data source for Solr indexing. I have two fields: "name"
and "college". How can I add auto-suggest based on these two fields?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Autocomplete-tp4013859.html
Sent from the Solr - User mailing list archive
next (and
the five next?) and pushing down some results if they are "too" similar to
previous ones.
Hope I am being clear.
Paul
My experience for the easiest query is solr/itas (aka velocity solr).
paul
On 22 Oct. 2012, at 11:15, Muwonge Ronald wrote:
> Hi all,
> have done some crawls for certain URLs with Nutch and indexed them to
> Solr. I kindly request assistance in getting the best search
> interf
, see the
Solr tutorial). You'd find a clue when it gets dropped.
Paul
On 30 Oct. 2012, at 18:47, Joe Corneli wrote:
> Hi Dave,
>
> Thanks for your offer to help!
>
> I moved the original post to a support request here:
> http://drupal.org/node/1827260
>
> (Notin
On 30 Oct. 2012, at 20:30, Joe Corneli wrote:
> select?q=... directly in Solr.
What's in there?
Are MathML islands gone?
paul
So I guess
> We call hyperbolic a loxodromic transformation that has a single
> fixed point.
Also becomes
We call hyperbolic a loxodromic transformation that has a single fixed point.
?
In this case, it's definitely the Drupal side doing the html-stripping.
paul
On 30 Oct. 2
Here is what we do for Curriki.org:
We run a background indexing by setting up another Solr instance that performs all the
indexing.
Then we can start the install process.
Then we can update the index with the things changed since the background
indexing.
paul
On 1 Nov. 2012, at 21:46, adityab wrote
Hello Wojtek,
I don't want to discourage all the famous CMSs around, nor Solr uptake,
but XWiki is quite a powerful CMS and has a search that is Lucene-based.
paul
On 07-Aug-09, at 22:42, Olivier Dobberkau wrote:
I've been asked to suggest a framework for managing a website
I don't want to join yet another mailing list or register for JIRA,
but I just noticed that the Javadoc for
SolrInputDocument.addField(String name, Object value, float boost) is
incredibly wrong - it looks like it was copied from a "deleteAll"
method.
--
http://www.linkedin.com/in/paultomblin
Which versions of Lucene, Nutch and Solr work together? I've
discovered that the Nutch trunk and the Solr trunk use wildly
different versions of the Lucene jars, and it's causing me problems.
--
http://www.linkedin.com/in/paultomblin
If I put an object into a SolrInputDocument and store it, how do I
query for it back? For instance, I stored a java.net.URI in a field
called "url", and I want to query for all the documents that match a
particular URI. The query syntax only seems to allow Strings, and if
I just try query.setQuer
On Mon, Aug 17, 2009 at 5:28 PM, Harsch, Timothy J. (ARC-SC)[PEROT
SYSTEMS] wrote:
> Assuming you have written the SolrInputDocument to the server, you would next
> query.
I'm sorry, I don't understand what you mean by "you would next query."
There appear to be some words missing from that sente
On Mon, Aug 17, 2009 at 5:30 PM, Ensdorf Ken wrote:
> You can escape the string with
>
> org.apache.lucene.queryParser.QueryParser.escape(String query)
>
> http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/queryParser/QueryParser.html#escape%28java.lang.String%29
>
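For context, QueryParser.escape simply backslash-prefixes the query parser's special characters so they are treated literally. A minimal stand-alone approximation (the exact special-character set varies by Lucene version, so treat the list below as illustrative):

```java
// Rough sketch of what org.apache.lucene.queryParser.QueryParser.escape()
// does: prefix each query-syntax special character with a backslash.
// The character list is approximate and version-dependent.
public class QueryEscape {
    static final String SPECIALS = "\\+-!():^[]\"{}~*?";

    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\'); // make the parser treat c literally
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Only the ':' is special in this URL (in older Lucene versions)
        System.out.println(escape("http://xcski.com/pharma/"));
    }
}
```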
Does this mean I should
On Mon, Aug 17, 2009 at 5:36 PM, Ensdorf Ken wrote:
>> Does this mean I should have converted my objects to string before
>> writing them to the server?
>>
>
> I believe SolrJ takes care of that for you by calling toString(), but you
> would need to convert explicitly when you query (and then esca
On Mon, Aug 17, 2009 at 5:47 PM, Paul Tomblin wrote:
> Hmmm. It's not working right. I've added 5 documents, 3 with the
> URL set to "http://xcski.com/pharma/" and 2 with the URL set to
> "http://xcski.com/nano/". Doing other sorts of queries seems to
I've got "text" and so if I
do an unqualified search it only finds in the field text. If I want
to search title, I can do "title:foo", but what if I want to find if
the search term is in any field, or if it's in "text" or "title" or
"concept" or "keywords"? I already tried "*:foo", but that throw
So if I want to make it so that the default search always searches
three specific fields, I can make another field multi-valued that they
are all copied into?
On Tue, Aug 18, 2009 at 10:46 AM, Marco Westermann wrote:
> I would say, you should use the copyField tag in the schema. eg:
>
>
>
> the t
On Tue, Aug 18, 2009 at 11:04 AM, Marco Westermann wrote:
> exactly! for example you could create a field called "all". And you copy
> your fields to it, which should be searched, when all fields are searched.
>
Awesome, that worked great. I made my "all" field 'stored="false"
indexed="true"' and
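The catch-all setup discussed here would look something like this in schema.xml (field and type names are illustrative, not taken from the thread):

```xml
<!-- Catch-all field: searched but not stored; multiValued so it can
     receive copies from several source fields -->
<field name="all" type="text" indexed="true" stored="false" multiValued="true"/>

<!-- Copy the individual fields into it at index time -->
<copyField source="text" dest="all"/>
<copyField source="title" dest="all"/>
<copyField source="concept" dest="all"/>
<copyField source="keywords" dest="all"/>

<!-- Make unqualified queries hit the catch-all field by default -->
<defaultSearchField>all</defaultSearchField>
```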
I'm trying to sort, but I am not always getting the correct results and
I'm not sure where to start tracking down the problem.
You can see the problem here (at least until it's fixed!):
http://nines.performantsoftware.com/search/saved?user=paul&name=poem
If you sort by T
On Wed, Aug 19, 2009 at 2:43 PM, Fuad Efendi wrote:
> Most probably Ctrl-C is graceful for Tomcat, and kill -9 too... Tomcat is
> smart... I prefer "/etc/init.d/my_tomcat" wrapper around catalina.sh ("su
> tomcat", /var/lock etc...) - ok then, Graceful Shutdown depends on how you
> started Tomcat.
Erik Hatcher wrote:
On Aug 19, 2009, at 2:45 PM, Paul Rosen wrote:
You can see the problem here (at least until it's fixed!):
http://nines.performantsoftware.com/search/saved?user=paul&name=poem
Hi Paul - that project looks familiar! :)
Hi Erik! I should hope so! And I'
Is there such a thing as a wildcard search? If I have a simple
solr.StrField with no analyzer defined, can I query for "foo*" or
"foo.*" and get everything that starts with "foo" such as 'foobar" and
"foobaz"?
--
http://www.linkedin.com/in/paultomblin
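On an unanalyzed StrField, foo* behaves as a prefix query over the indexed terms, and the classic wildcard syntax knows only * and ?; a dot has no regex meaning, so foo.* matches terms that literally start with "foo.". A self-contained sketch of that matching behaviour (plain Java, not Solr code):

```java
import java.util.List;
import java.util.stream.Collectors;

public class WildcardSketch {
    // Simplified view of a trailing-* wildcard query: a prefix match
    // over the indexed terms. '.' is matched literally, because Lucene
    // wildcard syntax has no regex operators.
    static List<String> prefixQuery(List<String> terms, String prefix) {
        return terms.stream()
                .filter(t -> t.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> terms = List.of("foobar", "foobaz", "foo.bar", "bar");
        System.out.println(prefixQuery(terms, "foo"));  // foo*  matches foobar, foobaz, foo.bar
        System.out.println(prefixQuery(terms, "foo.")); // foo.* matches only foo.bar
    }
}
```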
On Thu, Aug 20, 2009 at 10:51 AM, Andrew Clegg wrote:
> Paul Tomblin wrote:
>>
>> Is there such a thing as a wildcard search? If I have a simple
>> solr.StrField with no analyzer defined, can I query for "foo*" or
>> "foo.*" and get everyth
Is Solr like a RDBMS in that I can have multiple programs querying and
updating the index at once, and everybody else will see the updates
after a commit, or do I have to do something explicit to see others'
updates? Does it matter whether they're using the web interface,
SolrJ with a
CommonsHttpSolrS
ur yearly reindexing.
So, the question is:
Is there any way to get the info that is already in the solr index for a
document, so that I can use that as a starting place? I would just tweak
that record and add it again.
Thanks,
Paul
On Thu, Aug 27, 2009 at 1:27 PM, Eric
Pugh wrote:
> You can just query Solr, find the records that you want (including all
> the website data). Update them, and then send the entire record back.
>
Correct me if I'm wrong, but I think you'd end up losing the fields
that are indexed but not stored.
solr because I just can't create the
record the same way as I originally did.
(Besides the time involved in crawling all those websites, some of them
only allow us access for a limited amount of time, so to reindex, we
need to call them up and schedule a time for them to whitelist us.)
I'd really appreciate some help. Thanks in advance.
Paul
sets together outside of Solr.
Eric
On Thu, Aug 27, 2009 at 1:46 PM, Paul Tomblin wrote:
On Thu, Aug 27, 2009 at 1:27 PM, Eric
Pugh wrote:
You can just query Solr, find the records that you want (including all
the website data). Update them, and then send the entire record back.
Correct me
Can I get all the distinct values from the Solr "database", or do I
have to select everything and aggregate it myself?
--
http://www.linkedin.com/in/paultomblin
I've loaded some data into my solr using the embedded server, and I
can see the data using Luke. I start up the web app, and it says
>cwd=/Users/ptomblin/apache-tomcat-6.0.20
>SolrHome=/Users/ptomblin/src/lucidity/solr/
I hit the "schema" button and it shows the correct schema. However,
if I t
On Thu, Aug 27, 2009 at 9:24 PM, Paul Tomblin wrote:
>>cwd=/Users/ptomblin/apache-tomcat-6.0.20
>>SolrHome=/Users/ptomblin/src/lucidity/solr/
>
Ok, I've spotted the problem - while SolrHome is in the right place,
it's still looking for the data in
/Users/ptomblin/apach
Yesterday or the day before, I asked specifically if I would need to
restart the Solr server if somebody else loaded data into the Solr
index using the EmbeddedServer, and I was told confidently that no,
the Solr server would see the new data as soon as it was committed.
So today I fired up the Sol
On Fri, Aug 28, 2009 at 6:42 AM, Shalin Shekhar
Mangar wrote:
>> Ok, I've spotted the problem - while SolrHome is in the right place,
>> it's still looking for the data in
>> /Users/ptomblin/apache-tomcat-6.0.20/solr/data/
>>
>> How can I change that?
>>
>>
> One easy way is to hard code that loca
On Fri, Aug 28, 2009 at 8:04 AM, Chantal
Ackermann wrote:
> Paul Tomblin schrieb:
>> The conf file says:
>> ${solr.data.dir:./solr/data}
>> That indicates to me that there is some way to override that default
>> ./solr/data involving something called solr.data.dir, bu
On Thu, Aug 27, 2009 at 11:36 PM, Ryan McKinley wrote:
> Say you have an embedded solr server and an http solr server pointed to the
> same location.
> 1. make sure one of them is read-only! otherwise you can make a mess.
> 2. calling commit on the embedded solr instance, will not have any effect on
> t
On Fri, Aug 28, 2009 at 1:12 PM, Israel Ekpo wrote:
> Is the Solr wiki down?
>
There's a very useful web page for these questions:
http://downforeveryoneorjustme.com/
It confirms that yes, the wiki is down. I'm currently using the
Google cache to read the pages I need.
--
http://www.linkedin.
I'm trying to instantiate multiple cores. Since nothing is different
between the two cores except the schema and the data dir, I was hoping
to share the same instanceDir. Solr seems to recognize that there are
two cores, and gives me two different admin pages. But unfortunately
both the admin pa
That sounds very similar to my use case, too. (Mentioned in the recent
thread "Updating a solr record"). So +1 on allowing updates!
Jason Rutherglen wrote:
Don,
I started work on fixing this a while back. However I plan to
resume again soon. Basically one would be able to update fields
to a pa
Slightly off topic, but I'm getting tired of hitting the 'view source' keyboard
shortcut every time I do a solr query. Is there a way to make Safari display
xml as-is?
-- Sent from my Palm Prē
Every document I put into Solr has a field "origScore" which is a
floating point number between 0 and 1 that represents a score assigned
by the program that generated the document. I would like it that when
I do a query, it uses that origScore in the scoring, perhaps
multiplying the Solr score to
Gert,
we're doing a similar process on i2geo search, including simple
language expansion (one word is queried in several fields of each
language), and, though I haven't done it yet but will soon, it has
been suggested to me to do it as a qparser plugin.
paul
On 05-Sept-09, at
the input 'q=software'
with 'q=software OR program OR computer OR system OR package'.
Exactly.
The fact that you can master all the query classes is a good luxury
too, e.g. to do fine-grained queries without being worried about
escapes, by using once again a query-parse
n't necessarily wedded to using solr-ruby-rails-0.0.5, but I
looked at rsolr very briefly and didn't see any reference to multicore
there, either.
I can certainly hack something together, but it seems like this is a
common problem.
How are others doing multicore from ruby?
Thanks,
Paul
I'm trying to delete using SolJ's "deleteByQuery", but it doesn't like
it that I've added an "fq" parameter. Here's what I see in the logs:
Sep 9, 2009 1:46:13 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.queryParser.ParseException: Cannot parse
'url:http\:\/\/xcski\.com\
On Wed, Sep 9, 2009 at 2:07 PM, AHMET ARSLAN wrote:
> --- On Wed, 9/9/09, Paul Tomblin wrote:
>> SEVERE: org.apache.lucene.queryParser.ParseException:
>> Cannot parse
>> 'url:http\:\/\/xcski\.com\/pharma\/&fq=category:pharma':
>
>> Should
an otherwise
completely live environment.
3) There are a few more similar uses that might be coming, but I think
the main point is to be able to query on one or the other cores, or
both, or possibly a third one in the future.
Thanks,
Paul
lhost:8983/solr/core0,localhost:8983/solr/core1')
because it has a "?" in it.
Erik Hatcher wrote:
With solr-ruby, simply put the core name in the URL of the
Solr::Connection...
solr = Solr::Connection.new('http://localhost:8983/solr/core_name')
Erik
On Sep 9,
index
exhibits
index
reindex_resources
index
start.jar
-
And have all the cores share everything except an index.
How would I set that up?
Are there differences between 1.3 and 1.4 in this respect?
Thanks,
Paul
Ok. I have a workaround for now. I've duplicated the conf folder three
times and changed this line in solrconfig.xml in each folder:
${solr.data.dir:./solr/exhibits/data}
I can't wait for solr 1.4!
Noble Paul നോബിള് नोब्ळ् wrote:
the dataDir is a Solr1.4 feature
On Thu, Sep 10,
_fields' => { 'myfacet' => [ etc...], etc... }
which is what I expect.
If I add the ":shards => @cores" back in (so that I'm doing the exact
request above), I get:
'facet_counts' => {
'facet_dates' => {},
'facet_queries' => {},
'facet_fields' => {}
so I've lost my facet information.
Why would it correctly find my documents, but not report the facet info?
Thanks,
Paul
Can somebody point me to some sample code for using highlighting in
SolrJ? I understand the highlighted versions of the field comes in a
separate NamedList? How does that work?
--
http://www.linkedin.com/in/paultomblin
List highightSnippets =
> queryResponse.getHighlighting().get(id).get("content");
> }
> }
>
> Hope that gets you what you need.
>
> -Jay
> http://www.lucidimagination.com
>
> On Thu, Sep 10, 2009 at 3:19 PM, Paul Tomblin wrote:
>
>> Can s
w to set highlighting
> params and how to get back a List of highlighting results.
>
> -Jay
> http://www.lucidimagination.com
>
>
> On Thu, Sep 10, 2009 at 5:40 PM, Paul Tomblin wrote:
>
>> If I set snippets to 9 and "mergeContinuous" to true, will I get
&
Thanks to Jay, I have my code doing what I need it to do. If anybody
cares, this is my code:
SolrQuery query = new SolrQuery();
query.setQuery(searchTerm);
query.addFilterQuery(Chunk.SOLR_KEY_CONCEPT + ":" + concept);
query.addFilterQuery(Chunk.SOLR_KEY_CATEGORY +
no
results are returned.
I believe that the + is being stripped somehow, but I'm not sure where
exactly to look.
I included the debug info from the query, but I'm not sure if the output
is helpful.
Does anyone have ideas on this issue, and how I should try to proceed?
Many thanks,
Paul
Shalin Shekhar Mangar wrote:
On Fri, Sep 11, 2009 at 2:35 AM, Paul Rosen wrote:
Hi again,
I've mostly gotten the multicore working except for one detail.
(I'm using solr 1.3 and solr-ruby 0.0.6 in a rails project.)
I've done a few queries and I appear to be able to get hits f
this.
From the docs it should just be a matter of escaping:
I believe that the + is being stripped somehow but I'm not
sure where exactly to look.
I think your analyzer is eating up +, which tokenizer are you using
in it?
Do you want to return documents containing 'product+' by search
d reindexing but I get the same result. Nothing is found.
I assume one of the filters could be adjusted to keep the '+'.
The weird thing is that I tried to remove all filters from the analyzer
and I get the same result.
Paul
On 14 Sep 2009, at 15:17, AHMET ARSLAN wrote:
Hi Ahmet,
get most of its functionality
without it absorbing the + signs? Will I lose a lot of 'good'
functionality by removing it? 'preserveOriginal' sounds promising and
seems to work, but is it a good idea to use this?
On 14 Sep 2009, at 16:16, AHMET ARSLAN wrote:
--- O
transform the + into something else, and do this back and forwards to
get a match!
Hopefully this will be a standard solr install, but with this tweak
for escaped chars....
Paul
On 14 Sep 2009, at 17:01, Erick Erickson wrote:
Before you go too much further with this, I've just got to a
Interesting. I thought that would be the 'hard' approach rather than
adding a filter, but I guess that's all it really is anyway.
Has this been done before? Building a filter to transform a word there
and back?
On 14 Sep 2009, at 17:17, Chantal Ackermann wrote:
Paul Forsyth schrieb:
ossible?
Also, if this is insurmountable, I've discovered two show stoppers that
will prevent using multicore in my project (counting the lack of support
for faceting in multicore). Are these issues addressed in solr 1.4?
Thanks,
Paul
Shalin Shekhar Mangar wrote:
On Tue, Sep 15, 2009 at 2:39 AM, Paul Rosen wrote:
I've done a few experiments with searching two cores with the same schema
using the shard syntax. (using solr 1.3)
My use case is that I want to have multiple cores because a few different
people will be man
Fergus,
Implementing wildcard (//tagname) is definitely possible. I would love
to see it working. But if you wish to take a stab at it, I shall do
whatever I can to help.
>What is the use case that makes flow-through so useful?
We do not know which forEach XPath a given field is associated with
If I do a query for a couple of words in quotes, Solr correctly only returns
pages where those words appear exactly within the quotes. But the
highlighting acts as if the words were given separately, and stems them and
everything. For example, if I search for "knee pain", it returns a document
th
On Thu, Sep 24, 2009 at 7:04 PM, Koji Sekiguchi wrote:
> Set hl.usePhraseHighlighter parameter to true:
>
> http://wiki.apache.org/solr/HighlightingParameters#hl.usePhraseHighlighter
>
>
That seems to have done it. Thanks.
--
http://www.linkedin.com/in/paultomblin
Sorry for the long delay in responding, but I've just gotten back to
this problem...
I got the solr 1.4 nightly and the problem went away, so I guess it is a
solr 1.3 bug.
Thanks for all the input!
Lance Norskog wrote:
Paul, can you create an HTTP url that does this exact query?
Sorry about asking this here, but I can't reach wiki.apache.org right now.
What do I set in query.setMaxRows() to get all the rows?
--
http://www.linkedin.com/in/paultomblin
Sorry, in my last question I meant setRows, not setMaxRows. What do I pass to
setRows to get all matches, not just the first 10?
-- Sent from my Palm Prē
When I do a query directly from the web, the XML of the response
includes how many results would have been returned if it hadn't
restricted itself to the first 10 rows:
For instance, the query:
http://localhost:8080/solrChunk/nutch/select/?q=*:*&fq=category:mysites
returns:
0
0
*:*
category:mys
Nope, that just gets you the number of results returned, not how many
there could be. Like I said, if you look at the XML returned, you'll
see something like numFound="1251" on the result element,
but only 10 documents returned. getNumFound returns 10 in that case, not 1251.
2009/10/2 Noble Paul നോബിള് नोब्ळ् :
> QueryResponse#ge
On Fri, Oct 2, 2009 at 3:13 PM, Shalin Shekhar Mangar
wrote:
> On Fri, Oct 2, 2009 at 8:11 PM, Paul Tomblin wrote:
>
>> Nope, that just gets you the number of results returned, not how many
>> there could be. Like I said, if you look at the XML returned, you'll
>>
e queries will be much faster if it isn't returned.
Thanks,
Paul
09 AM, Paul Tomblin <ptomb...@xcski.com> wrote:
> >>
> > Nope. Check again. getNumFound will definitely give you 1251.
> > SolrDocumentList#size() will give you 10.
>
> I don't have to check again. I put this log into my query code:
>Qu
On Fri, Oct 2, 2009 at 5:04 PM, Shalin Shekhar Mangar
wrote:
> Can you try this with the Solrj client
> in the official 1.3 release or even trunk?
I did a svn update to 821188 and that seems to have fixed the problem.
(The jar files changed from -1.3.0 to -1.4-dev) I guess it's been
longer sinc
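For reference, the distinction this thread converges on: SolrDocumentList.size() is the number of rows actually returned (capped by the rows parameter), while getNumFound() is the total number of matches in the index. A stand-alone sketch of the two quantities (illustrative stand-in class, not the real SolrJ one):

```java
import java.util.ArrayList;

// Illustrative stand-in for SolrJ's SolrDocumentList: a List holding
// only the returned page of documents, plus a numFound field carrying
// the total hit count reported by the server.
public class DocList extends ArrayList<String> {
    long numFound;

    DocList(long numFound, int rowsReturned) {
        this.numFound = numFound;
        for (int i = 0; i < rowsReturned; i++) add("doc" + i);
    }

    public static void main(String[] args) {
        DocList page = new DocList(1251, 10); // 1251 total matches, rows=10
        System.out.println(page.size());      // rows in this page: 10
        System.out.println(page.numFound);    // total matches: 1251
    }
}
```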
ldAliasesAndGlobsInParams
On Fri, Oct 2, 2009 at 1:27 PM, Paul Rosen wrote:
Hi,
Is there a way to request all fields in an object EXCEPT a particular one?
In other words, the following pseudo code is what I'd like to express:
req = Solr::Request::Standard.new(:start => page*size,
w much slower that would be than doing the merge (how long
does that take?), then going through the entire index and eliminating
duplicates.
Anyway, any advice would be appreciated,
Paul
Shalin Shekhar Mangar wrote:
The path on the wiki page was wrong. You need to use the adminPath in the
url. Look at the adminPath attribute in solr.xml. It is typically
/admin/cores
So the correct path for you would be:
http://localhost:8983/solr/admin/cores?action=mergeindexes&core=merged&ind
olr/merged}
status=0 QTime=19
It looks to me like the string should have "&sort=title_sort+asc"
instead of ";title_sort_asc" tacked on to the query, but I'm not sure
about that.
Any clues what I'm doing wrong?
Thanks,
Paul
(using solr 1.4 nightly; solr-ruby 0.0.7)
I am attempting to do an auto-complete with the following statement:
req = Solr::Request::Standard.new(
:start => 0, :rows => 0, :shards => [ 'resources', 'exhibits'],
:query => "*:*", :filter_queries => [ ],
:facets => {:fields => [ "content" ], :min
Thank you Yonik, that worked (I added :method => :enum to the :facets hash).
And it seems to work really fast, too.
Yonik Seeley wrote:
Hi Paul,
The new faceting method is faster in the general case, but doesn't
work well for faceting full text fields (which tends not to work well
re
tor]
hash[:df] = @params[:default_field]
Does this make sense? Should this be changed in the next version of the
solr-ruby gem?
Paul Rosen wrote:
Hi all,
I'm using solr-ruby 0.0.7 and am having trouble getting Sort to work.
I have the following statement:
req = Solr::Request::St
e option for the query to adjust the length of the returned
string...
Thanks in advance,
Paul Forsyth
Thanks Grant,
I'm still a bit of a newbie with Solr :)
I was able to add a new non-stemming field along with a copyField, and
that seems to have done the trick :)
Until I tried this I didn't quite realise what copyFields did...
Thanks again,
Paul
On 19 Oct 2009, at 11:23, Paul Fo
Is there any difference to the relevancy score for a document that has
been added directly to an index vs. the same document that got into the
index because of a merge?
In other words, I'd like to build my index in pieces (since people in
different cities will be working on parts of it), but I
Not with Solr, but with Lucene there is a project called
SemanticVectors.
It would be cute to make it a Solr module.
paul
On 30-Oct-09, at 09:17, György Frivolt wrote:
Hi,
Does anyone of you have experiences with using LSA, Latent Semantic
Analysis with Solr? I would like to search
Am I right in thinking that a document whose sortable field is only
two sentences long and contains the search term once will score higher
than one that is 50 sentences long and contains the search term 4
times? Is there a way to change it to score higher based only on
number of hits?
--
ht
:48 AM, Paul Tomblin wrote:
>> Am I right in thinking that a document whose sortable field is only
>> two sentences long and contains the search term once will score higher
>> than one that is 50 sentences long and contains the search term 4
>> times?
>
> Yep. Assu
I was looking at the script in example/exampledocs to feed documents
to the server.
Just to see if it was possible, I took one of the documents that I've
previously indexed using SolrJ, and I tried to feed it directly to the
Solr server using the following command:
curl http://localhost:8697/solr
>
> -Yonik
> http://www.lucidimagination.com
>
>
>
> On Sat, Oct 31, 2009 at 10:37 AM, Paul Tomblin wrote:
>> I was looking at the script in example/exampledocs to feed documents
>> to the server.
>>
>> Just to see if it was possible, I took one of the documents that I'v
On Sat, Oct 31, 2009 at 11:08 AM, Yonik Seeley
wrote:
> I personally think it would be cleaner to allow a post of just a doc
> element (or multiple with a surrounding tag), esp now that we can put
> modifiers in the URL.
Exactly. The action should be in the url.
>
> For now, just use shell scripting I gue
In an earlier message, Yonik suggested that I use omitNorms="true" if
I wanted the length of the document to not be counted in the scoring.
The documentation also mentions that it omits "index-time boosting".
What does that mean?
--
http://www.linkedin.com/in/paultomblin
http://careers.stackoverf
Are there any pitfalls to storing an arbitrary text file in the same
directory as the solr index?
We're slinging different versions of the index around while we're
testing and it's hard to keep them straight.
I'd like to put a readme.txt file in the directory that contains some
history about
If I want to do a query and only return X number of rows at a time,
but I want to keep querying until I get all the rows, how do I do that?
Can I just keep advancing query.setStart(...) and then checking if
server.query(query) returns any rows? Or is there a better way?
Here's what I'm thinking
On Mon, Nov 2, 2009 at 8:40 PM, Avlesh Singh wrote:
>>
>> final static int MAX_ROWS = 100;
>> int start = 0;
>> query.setRows(MAX_ROWS);
>> while (true)
>> {
>> QueryResponse resp = solrChunkServer.query(query);
>> SolrDocumentList docs = resp.getResults();
>> if (docs.size() == 0)
>> break;
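The loop above can be completed along these lines, with the server call replaced by a stubbed page fetch so the control flow is self-contained (the start/rows names mirror the Solr parameters; the stub and its 250-document result set are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class PageAllResults {
    static final int MAX_ROWS = 100;

    // Stub for server.query(query): returns the slice [start, start+rows)
    // of a pretend 250-document result set. In real SolrJ code this is
    // query.setStart(start); server.query(query).getResults();
    static List<Integer> fetchPage(int start, int rows) {
        List<Integer> page = new ArrayList<>();
        for (int i = start; i < Math.min(start + rows, 250); i++) page.add(i);
        return page;
    }

    static List<Integer> fetchAll() {
        List<Integer> all = new ArrayList<>();
        int start = 0;
        while (true) {
            List<Integer> docs = fetchPage(start, MAX_ROWS);
            if (docs.isEmpty()) break; // empty page: no more rows
            all.addAll(docs);
            start += MAX_ROWS;         // advance to the next page
        }
        return all;
    }

    public static void main(String[] args) {
        System.out.println(fetchAll().size()); // all 250 documents collected
    }
}
```

In the real client the extra cost is one final empty query; stopping as soon as a page comes back shorter than MAX_ROWS avoids it.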