On Sat, Jul 11, 2009 at 11:25 PM, Michael Lugassy wrote:
> Hi guys --
>
> Using solr 1.4 functions at query-time, can I dynamically boost
> certain documents which are: a) not on the same range, i.e. have very
> different document ids,
Yes.
> b) have different boost values,
Yes.
> c) part
I read somewhere that it is deprecated
On Sat, Jul 11, 2009 at 2:00 AM, Matt Mitchell wrote:
>
> I'm experimenting with Solr components. I'd like to be able to use a
> nice-high-level querying interface like the DirectSolrConnection or
> EmbeddedSolrServer provides. Would it be considered absolutely insane to
> use
> one of those *wit
Hi,
I'm using Nutch to crawl and Solr to index the documents.
I want to delete documents containing a particular word or pattern in the url
field.
Is there something like a Prune Index tool in Solr?
Thanks in advance
Beats
be...@yahoo.com
--
View this message in context:
http://www.nabble.com/Delet
Gargate, Siddharth wrote:
I read somewhere that it is deprecated
Yeah, as long as you explicitly use 'lucenePlusSort' parser via defType
parameter:
q=*:*;id desc&defType=lucenePlusSort
Koji
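For reference, the semicolon sort syntax handled by the 'lucenePlusSort' parser is the deprecated form; the same request can be written with the standard sort parameter instead (a sketch of the two equivalent forms):

```text
# deprecated semicolon syntax, only works with defType=lucenePlusSort
q=*:*;id desc&defType=lucenePlusSort

# equivalent standard syntax
q=*:*&sort=id desc
```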
On Mon, Jul 13, 2009 at 6:34 AM, Beats wrote:
>
> Hi,
>
> I'm using Nutch to crawl and Solr to index the documents.
>
> I want to delete documents containing a particular word or pattern in the url
> field.
>
> Is there something like a Prune Index tool in Solr?
>
> Thanks in advance
>
> Beats
> be...@ya
Hi,
While using Solr, what would the behaviour be if we perform a
search and get more than one million hits?
Regards,
Raakhi
On Jul 13, 2009, at 4:58 AM, Gargate, Siddharth wrote:
I read somewhere that it is deprecated
see the 2nd paragraph in CHANGES.txt:
http://svn.apache.org/repos/asf/lucene/solr/trunk/CHANGES.txt
Erik
You can delete by query - url:some-word
Erik
On Jul 13, 2009, at 6:34 AM, Beats wrote:
Hi,
I'm using Nutch to crawl and Solr to index the documents.
I want to delete documents containing a particular word or pattern
in the url
field.
Is there something like a Prune Index tool in Solr?
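The delete-by-query Erik suggests can be posted as XML to the update handler; a minimal sketch (the url field and term come from the question above, and the deletes only become visible after a commit):

```xml
<delete><query>url:some-word</query></delete>
<commit/>
```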
Hello,
I'm using SolrJ on a Tomcat environment with a proxy configured in the
catalina.properties
http.proxySet=true
http.proxyPort=8080
http.proxyHost=XX.XX.XX.XX
My CommonsHttpSolrServer does not seem to use the configured proxy, this
results in a " java.net.ConnectException: Connection refu
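One likely cause: CommonsHttpSolrServer is built on Commons HttpClient 3.x, which does not read the java.net http.proxyHost/http.proxyPort system properties. A sketch of a workaround, setting the proxy on the underlying client directly (the Solr URL, proxy host, and port here are placeholders):

```java
// Configure the proxy on the HttpClient instance SolrJ actually uses,
// since Commons HttpClient 3.x ignores the JVM proxy properties.
CommonsHttpSolrServer server =
    new CommonsHttpSolrServer("http://solrhost:8983/solr");
server.getHttpClient().getHostConfiguration().setProxy("XX.XX.XX.XX", 8080);
```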
Hey Shalin,
My thought was to build a component using DSC, but not for the actual
client. The client (for this application) would still be using an HTTP
connection. I'm happy to hear this approach is valid. If I need efficiency,
I can still use the lower level objects.
Do you or anyone else know
It depends (tm) on what you try to do with the results. You really need to give
us some more details on what you want to *do* with 1,000,000 hits
before any meaningful response is possible.
Best
Erick
On Mon, Jul 13, 2009 at 8:47 AM, Rakhi Khatwani wrote:
> Hi,
> If while using Solr, what wo
Hi,
I'm in the process of making a javascriptless web interface to Solr (the
nice ajax-version will be built on top of it unobtrusively). Our
database has a lot of fields and so I've grouped those with similar
characteristics to make several different 'widgets' (like a numerical
type which ge
Thanks for this -- we're also trying out bobo-browse for Lucene, and
early results look pretty enticing. They greatly sped up how fast you
read in documents from disk, among other things:
http://bobo-browse.wiki.sourceforge.net/
On Sat, Jul 11, 2009 at 12:10 AM, Shalin Shekhar
Mangar wrote:
> On S
SOLR 1.4 has a new feature, https://issues.apache.org/jira/browse/SOLR-475,
that speeds up faceting on fields with many terms by adding
an UnInvertedField.
Bobo uses a custom field cache as well. It may be useful to benchmark the 3
different approaches (bitsets, SOLR-475, Bobo). This could be a good w
Hi all,
When I'm using the TermVectorComponent I receive term vectors with all
tokens in the documents that meet my search criteria. I would be
interested in getting the offsets for just those terms in the documents
that meet the search criteria. My documents are about 200 K and are in
XML. If
Does Solr have the ability to do subqueries, like this one (in SQL)?
SELECT id, first_name
FROM student_details
WHERE first_name IN (SELECT first_name
                     FROM student_details
                     WHERE subject = 'Science');
If so, how performant are these kinds of queries?
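Solr (as of 1.4) has no server-side subqueries or joins, so a query like this is usually emulated with two round trips from the client; a sketch using the field names from the SQL above:

```text
# 1) run the inner query and collect the first_name values client-side
q=subject:Science&fl=first_name&rows=1000

# 2) build an OR query over the collected values and run it as the outer query
q=first_name:(John OR Mary OR Priya)
```

Performance then depends mostly on how many distinct values step 1 returns, since they are all expanded into the second query.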
Hi,
We have a Solr index of size 626 MB, with 141,810 documents indexed. We
have configured an index-based spellchecker with the buildOnCommit option
set to true. The spellcheck index is 8.67 MB.
We use data import handler to create the index from scratch and also to
update the index period
On Mon, Jul 13, 2009 at 7:56 PM, gwk wrote:
>
> Is there a good way to select the top X facets and also include some
> specific terms, something like
> facet.field=country&f.country.facet.limit=X&f.country.facet.includeterms=Narnia,Guilder
> or is there some other way to achieve this
Ok, thanks. I played with it enough to get plain text out at least,
but I'll wait for the resolution of SOLR-284
-Peter
On Sun, Jul 12, 2009 at 9:20 AM, Yonik Seeley wrote:
> Peter, I'm hacking up solr cell right now, trying to simplify the
> parameters and fix some bugs (see SOLR-284)
> A qui
I seem to recall that the Highlighter in Solr is pluggable, so you may
want to work at that level instead of the client side. Otherwise, you
likely would have to implement your own TermVectorMapper and add that
to the TermVectorComponent capability which then feeds your client.
For an exam
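A custom TermVectorMapper along the lines Grant describes might look roughly like this (a sketch against the Lucene 2.4-era API; the class name and the idea of filtering against a set of matching terms are illustrative, not an existing Solr component):

```java
// Keeps offsets only for terms that matched the query, instead of
// all tokens in the document's term vector.
public class MatchingTermsMapper extends TermVectorMapper {
    private final Set<String> matchingTerms;
    private final Map<String, TermVectorOffsetInfo[]> offsets =
        new HashMap<String, TermVectorOffsetInfo[]>();

    public MatchingTermsMapper(Set<String> matchingTerms) {
        this.matchingTerms = matchingTerms;
    }

    @Override
    public void setExpectations(String field, int numTerms,
                                boolean storeOffsets, boolean storePositions) {
        // no per-field state needed for this sketch
    }

    @Override
    public void map(String term, int frequency,
                    TermVectorOffsetInfo[] termOffsets, int[] positions) {
        if (matchingTerms.contains(term)) {
            offsets.put(term, termOffsets); // keep offsets for matches only
        }
    }

    public Map<String, TermVectorOffsetInfo[]> getOffsets() {
        return offsets;
    }
}
```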
I have been getting exceptions thrown when users try to send boolean
queries into the dismax handler. In particular, with a leading 'OR'.
I'm really not sure why this happens - I thought the dismax parser
ignored AND/OR?
I'm using rev 779609 in case there were recent changes to this. Is
this a k
SolrIndexConfig accepts a mergePolicy class name, however how does one
inject properties into it?
I am new to Solr and trying to get it set up to index files from a
directory structure on a server. I have a few questions.
1.) Is there an application that will return the search results in a
user friendly format?
2.) How do I move Solr from the example environment into a production
environ
It doesn't ignore OR and AND, though it probably should. I think there is a
JIRA issue for it somewhere.
On Mon, Jul 13, 2009 at 4:10 PM, Peter Wolanin wrote:
> I can still generate this error with Solr built from svn trunk just now.
>
> http://localhost:8983/solr/select/?qt=dismax&q=OR+vti+OR+fo
Hi Brad:
We have since (Bobo) added some perf tests which allows you to do some
benchmarking very quickly:
http://code.google.com/p/bobo-browse/wiki/BoboPerformance
Let me know if you need help setting up.
-John
On Mon, Jul 13, 2009 at 10:41 AM, Jason Rutherglen <
jason.rutherg...@gmail
Indeed - I assumed that only the "+" and "-" characters had any
special meaning when parsing dismax queries and that all other content
would be treated just as keywords. That seems to be how it's
described in the dismax documentation?
Looks like this is a relevant issue (is there another)?
https
Hello!
I'm working with Solr-1.3.0 using a sharded index for distributed,
aggregated search. I've successfully run through the example described in
the DistributedSearch wiki page. I have built an index from a corpus of some
50mil documents in an HBase table and created 7 shards using the
org.apac
I can still generate this error with Solr built from svn trunk just now.
http://localhost:8983/solr/select/?qt=dismax&q=OR+vti+OR+foo
I'm doubly perplexed by this since 'or' is in the stopwords file.
-Peter
On Mon, Jul 13, 2009 at 3:15 PM, Peter Wolanin wrote:
> I have been getting exceptions t
Try using LuSql to create the index. It is 4-10 times faster on a
multicore machine, and can run in 1/20th the heap size Solr needs.
See slides 22-25 in this presentation comparing Solr DIH with LuSql:
http://code4lib.org/files/glen_newton_LuSql.pdf
LuSql: http://lab.cisti-icist.nrc-cnrc.gc.ca/ci
Hi,
I'm setting up an embedded Solr server from a unit test (the non-bolded
lines are just moving test resources to a tmp directory which is acting as
solr.home.)
final File dir = FileUtils.createTmpSubdir();
*System.setProperty("solr.solr.home", dir.getAbsolutePath());*
I believe that constructor expects to find an alternate format solr config
that specifies the cores, eg like the one you can find in
example/multicore/solr.xml
http://svn.apache.org/repos/asf/lucene/solr/trunk/example/multicore/solr.xml
Looks like that error is not finding the root solr node, so l
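The multicore solr.xml that constructor looks for has roughly this shape (a sketch based on the example file linked above; core names and instanceDir values are placeholders):

```xml
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>
```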
The wiki page for merging solr cores
(http://wiki.apache.org/solr/MergingSolrIndexes) mentions that the cores
being merged cannot be indexed to during the merge. What about the core
being merged *to*? In terms of the example on the wiki page, I'm asking
if core0 can add docs while core1 and core2 a
Thanks Grant,
I think I get the idea.
Grant Ingersoll wrote:
I seem to recall that the Highlighter in Solr is pluggable, so you may
want to work at that level instead of the client side. Otherwise, you
likely would have to implement your own TermVectorMapper and add that
to the TermVectorCo
Shall we create an issue for this so we can list out desirable features?
On Sun, Jul 12, 2009 at 7:01 AM, Yonik Seeley wrote:
> On Sat, Jul 11, 2009 at 7:38 PM, Jason
> Rutherglen wrote:
> > Are we planning on implementing caching (docsets, documents, results) per
> > segment reader or is this s
Thanks. I should have googled first. I came across:
http://www.nabble.com/EmbeddedSolrServer-API-usage-td19778623.html
For reference, my code is now:
final File dir = FileUtils.createTmpSubdir();
System.setProperty("solr.solr.home", dir.getAbsolutePath());
final File conf =
Is there a way to set this in SOLR 1.3 using solrconfig? Otherwise one
needs to instantiate a class that statically
calls BooleanQuery.setAllowDocsOutOfOrder?
We're building a spell index from a field in our main index with the
following configuration:
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpell</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>
This works great and re-builds the spelling index on commits as expected.
However, we know there are misspellings in
I don't think there is a way currently, but it might make a nice patch. Or
you could just implement a custom SolrSpellChecker - both
FileBasedSpellChecker and IndexBasedSpellChecker are actually like maybe 50
lines of code or less. It would be fairly quick to just plug a custom
version in as a plug
Considering that only 20 to 30 docs changed, the indexing is not the
bottleneck. The bottleneck is probably the DB and the time taken for the
query to run. Are there deltaQueries in the sub-entities? If you can
create a 'VIEW' in the DB to identify the delta, it could be faster.
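The VIEW-plus-deltaQuery setup suggested above might be wired into the DataImportHandler's data-config.xml roughly like this (a sketch; the table, view, column, and entity names are made up for illustration):

```xml
<entity name="item"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM delta_view
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item
                          WHERE id = '${dataimporter.delta.id}'"/>
```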
On Tue, Ju
Any updates on this?
Cheers.
Gurjot Singh wrote:
>
> Hi, I am curious to know when is the scheduled/tentative release date of
> Solr 1.4.
>
> Thanks,
> Gurjot
>
>
On Tue, Jul 14, 2009 at 1:33 AM, Kevin Miller <
kevin.mil...@oktax.state.ok.us> wrote:
> I am new to Solr and trying to get it set up to index files from a
> directory structure on a server. I have a few questions.
>
> 1.) Is there an application that will return the search results in a
> user fr