Hi Brian,
I can't find any documentation or a how-to for the DisMaxQueryHandler?
Regards
Thomas
-Original Message-
From: Brian Carmalt [mailto:[EMAIL PROTECTED]
Sent: Friday, 13 June 2008 08:52
To: solr-user@lucene.apache.org
Subject: Re: AW: AW: My First Solr
The ... incorrect ... (undefined field text).
Regards Thomas
-Original Message-
From: Brian Carmalt [mailto:[EMAIL PROTECTED]
Sent: Friday, 13 June 2008 09:50
To: solr-user@lucene.apache.org
Subject: Re: My First Solr
http://wiki.apache.org/solr/DisMaxRequestHandler
In solrconfig.xml th
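A minimal sketch of how such a handler is typically registered in solrconfig.xml (the handler name, fields and boosts below are illustrative; on very old Solr versions the class would be solr.DisMaxRequestHandler instead of a SearchHandler with defType=dismax):

  <requestHandler name="/dismax" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">dismax</str>
      <!-- fields to search, with per-field boosts -->
      <str name="qf">title^2.0 text^1.0</str>
      <str name="mm">1</str>
    </lst>
  </requestHandler>

A request against that handler, e.g. /dismax?q=my search words, is then parsed with the dismax rules instead of the standard query parser.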
Hi Brian,
thank you for the help. Where are you from?
Regards Thomas
-Original Message-
From: Brian Carmalt [mailto:[EMAIL PROTECTED]
Sent: Friday, 13 June 2008 11:43
To: solr-user@lucene.apache.org
Subject: Re: AW: My First Solr
No, you do not have to reindex. You do have to
heterogeneous environment. These extra fields only take up additional space
in my index, which is a disadvantage for me.
How can I specify arbitrary XML elements that should be indexed into my
one and only field "text"? I have no need for additional fields in my
index.
Any help is appreciated.
Thomas
in my previous post, this approach is especially helpful, if
you have heterogeneous documents with different XML-elements.
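A minimal schema.xml sketch of that dynamic-field-plus-copyField approach, assuming the single searchable field is called "text" and the incoming element fields themselves should not take up index or storage space (field and type names are illustrative):

  <!-- "ignored" is the stock no-op type: indexed="false" stored="false" -->
  <dynamicField name="*" type="ignored" multiValued="true"/>
  <!-- copyField sees the raw incoming values, so they still reach "text" -->
  <copyField source="*" dest="text"/>
  <field name="text" type="text" indexed="true" stored="false" multiValued="true"/>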
Erik Hatcher wrote:
Thomas - you will need to do this client-side if you don't want to use
copyField. The client needs to gather up all the text you want
indexe
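A minimal SolrJ sketch of that client-side gathering, using a current SolrJ API rather than the one from this thread's era; the URL and field names are placeholders:

  import java.util.Map;
  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.common.SolrInputDocument;

  public class TextGatherer {
    public static void main(String[] args) throws Exception {
      SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

      // values pulled out of the source XML, whatever elements it happens to contain
      Map<String, String> elements = Map.of(
          "title", "Some title",
          "body", "Some body text",
          "footnote", "An element only some documents have");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");

      // concatenate everything into the one searchable field instead of per-element fields
      StringBuilder all = new StringBuilder();
      for (String value : elements.values()) {
        all.append(value).append(' ');
      }
      doc.addField("text", all.toString().trim());

      solr.add(doc);
      solr.commit();
      solr.close();
    }
  }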
- high-lighting of search terms
Any sample code is highly appreciated.
Thanks for your help
Thomas
their hands...hah.
Otis
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simpy -- http://www.simpy.com/ - Tag - Search - Share
- Original Message
From: thomas arni <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Monday, March 19, 2007 9:47:51 AM
Subject: Simple Web Interface
Hello
I would like to build a simple
You can configure that in the "schema.xml" file:
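A minimal sketch of the kind of setting meant here, assuming the goal is to make multi-word queries default to AND instead of OR:

  <!-- in schema.xml -->
  <solrQueryParser defaultOperator="AND"/>

In later Solr versions this element is deprecated and the same effect is achieved per request with the q.op parameter, e.g. q=small business&q.op=AND.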
Thierry Collogne wrote:
Hello,
I have a small question. When I do a search and enter 2 words, separated by
a space (for example small business), the query is done like small OR
business.
So I get results containing small, business or smal
Hi,
I'm also just at that point where I think I need a wildcard facet.field
parameter (or someone points out another solution for my problem...).
Here is my situation:
I have many products of different types with totally different
attributes. There are currently more than 300 attributes
First: sorry for the bad quoting, I only found your message in the archive...
I have many products of different types with totally different
attributes. There are currently more than 300 attributes
I use dynamic fields to import the attributes into solr without having
to define a specific fi
Martin Grotzke wrote:
On Tue, 2007-06-19 at 19:16 +0200, Thomas Traeger wrote:
Hi,
I'm also just at that point where I think I need a wildcard facet.field
parameter (or someone points out another solution for my problem...).
Here is my situation:
I have many products of diff
Chris Hostetter wrote:
: to make it clear, i agree that it doesn't make sense faceting on all
: available fields, I only want faceting on those 300 attributes that are
: stored together with the fields for full text searches. A
: product/document has typically only 5-10 attributes.
:
: I like t
: Faceting on manufacturers and categories first and than present the
: corresponding facets might be used under some circumstances, but in my case
: the category structure is quite deep, detailed and complex. So when
: the user enters a query I like to say to him "Look, here are the
: manufactu
Hi,
We have the text field below configured on fields that are both stored and
indexed. It seems to me that applying the same filters on both index and query
would be redundant, and perhaps a waste of processing on the retrieval side if
the filter work was already done on the index side. Is thi
stopped filtering on insert for those and switched to filtering on query
based on recommendations from the Solr Doc.
Thanks,
TZ
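A minimal fieldType sketch of the asymmetric setup being discussed, with a heavier filter (a synonym filter is used here purely as an assumed example) applied only at query time while both sides share the basic chain:

  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- runs only when searching, so changing it needs no reindex -->
      <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" expand="true"/>
    </analyzer>
  </fieldType>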
On 8/15/18, 3:17 PM, "Andrea Gazzarini" wrote:
>Hi Thomas,
>as you know, the two analyzers play in a different moment, with a
>different input and a
Hi,
I’m trying to track down an odd issue I’m seeing when using the
SolrEntityProcessor to seed some test data from a solr 4.x cluster to a solr
7.x cluster. It seems like strings are being interpreted as multivalued when
passed from a string field to a text field via the copyTo directive. Any
7, 2018, 11:50 PM Shawn Heisey, wrote:
>
>> On 8/17/2018 6:15 PM, Zimmermann, Thomas wrote:
>> > I'm trying to track down an odd issue I'm seeing when using the
>> SolrEntityProcessor to seed some test data from a solr 4.x cluster to a
>> solr 7.x cluster. It seems li
Hi,
We have a custom java plugin that leverages the UpdateRequestProcessorFactory
to push data to multiple cores when a single core is written to. We are
building the plugin with maven, deploying it to /solr/lib and sourcing the jar
via a lib directive in our solr config. It currently works cor
In case anyone else runs into this, I tracked it down. I had to force maven to
explicitly include all of its dependent jars in the plugin jar using the
assembly plugin in the pom like so:
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.5.3</version>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
We have a Solr v7 Instance sourcing data from a Data Import Handler with a Solr
data source running Solr v4. When it hits a single server in that instance
directly, all documents are read and written correctly to the v7. When we hit
the load balancer DNS entry, the resulting data import handler
Question about CloudSolrClient and CLUSTERSTATUS. We just deployed a 3 server
ZK cluster and a 5 node solr cluster using the CloudSolrClient in Solr 7.4.
We're seeing a TON of traffic going to one server with just cluster status
commands. Every single query seems to be hitting this box for statu
On 11/6/18, 11:39 AM, "Shawn Heisey" wrote:
>On 11/6/2018 9:06 AM, Zimmermann, Thomas wrote:
>> For example - 75k request per minute going to this one box, and 3.5k
>>RPM to all other nodes in the cloud.
>>
>> All of those extra requests on the one box are
I should mention I'm also hanging out in the Solr IRC Channel today under
the nick "apatheticnow" if anyone wants to follow up in real time during
business hours EST.
On 11/6/18, 11:39 AM, "Shawn Heisey" wrote:
>On 11/6/2018 9:06 AM, Zimmermann, Thomas wrote:
>> For exampl
this, we could of course give it
a quick go.
-TZ
On 11/6/18, 12:35 PM, "Shawn Heisey" wrote:
>On 11/6/2018 10:12 AM, Zimmermann, Thomas wrote:
>> Shawn -
>>
>> Server performance is fine and request time are great. We are tolerating
>> the level of traffic,
nd mention of a somewhat similar situation with BooleanQuery, which was
considered a bug and fixed in 2016:
https://issues.apache.org/jira/browse/LUCENE-7132
So my questions are:
1. Is there something wrong in my query that prevents the “Netzteil”-only
product to get a score of 2.0?
2. Shouldn’t the score in the result and the explain section always be the same?
Best regards,
Thomas
ilter.
The WhitespaceTokenizerFactory ensures that you can define synonyms with
hyphens like mac-book -> macbook.
Best regards, Thomas.
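A minimal field type sketch of the setup described, where whitespace tokenization keeps hyphenated terms such as mac-book intact so the synonym mapping can see them; the type and file names are illustrative, and the synonyms.txt line assumed here is: mac-book => macbook

  <fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="query">
      <!-- whitespace tokenization leaves "mac-book" as a single token -->
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
              tokenizerFactory="solr.WhitespaceTokenizerFactory" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>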
On 05.01.19, 02:11, "Wei" wrote:
Hello,
We are upgrading to Solr 7.6.0 and noticed that SynonymFilter and
WordDelimiterFilter ha
On 04.01.19, 09:11, "Thomas Aglassinger" wrote:
> When debugging a query using multiplicative boost based on the product()
> function I noticed that the score computed in the explain section is correct
> while the score in the actual result is wrong.
We dug into th
dear community,
Is it possible to index documents (e.g. PDF, Word, ...) for full-text search
without storing their content (payload) inside the Solr server?
Thanking you in advance for your help
BR
Tom
dear community,
I would like to automatically add a SHA-256 file hash to a document field
after a binary file is posted to an ExtractingRequestHandler.
At first I thought that the ExtractingRequestHandler had such a feature, but
so far I did not find a configuration option for it.
It was mentioned that I should impl
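One possible route, sketched below as an assumption rather than the thread's actual solution, is a small custom UpdateRequestProcessor that hashes a field of the incoming document and writes the digest into another field; the field names are placeholders, and note that it hashes the extracted content Solr receives, not the original binary stream:

  import java.io.IOException;
  import java.math.BigInteger;
  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;
  import org.apache.solr.common.SolrInputDocument;
  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.response.SolrQueryResponse;
  import org.apache.solr.update.AddUpdateCommand;
  import org.apache.solr.update.processor.UpdateRequestProcessor;
  import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

  public class Sha256HashProcessorFactory extends UpdateRequestProcessorFactory {
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                              UpdateRequestProcessor next) {
      return new UpdateRequestProcessor(next) {
        @Override
        public void processAdd(AddUpdateCommand cmd) throws IOException {
          SolrInputDocument doc = cmd.getSolrInputDocument();
          Object content = doc.getFieldValue("content");   // hypothetical source field
          if (content != null) {
            try {
              MessageDigest md = MessageDigest.getInstance("SHA-256");
              byte[] digest = md.digest(content.toString().getBytes(StandardCharsets.UTF_8));
              // hex-encode the digest into a plain string field
              doc.setField("sha256", String.format("%064x", new BigInteger(1, digest)));
            } catch (NoSuchAlgorithmException e) {
              throw new IOException(e);
            }
          }
          super.processAdd(cmd);
        }
      };
    }
  }

The factory would then be registered in an updateRequestProcessorChain in solrconfig.xml and selected on the extract handler via the update.chain parameter.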
g Support Training - http://sematext.com/
>
>
>
> > On 23 May 2018, at 06:46, Thomas Lustig wrote:
> >
> > dear community,
> >
> > Is it possible to index documents (e.g. pdf, word,...) for
> fulltextsearch
> > without storing their content(payload) inside Solr server?
> >
> > Thanking you in advance for your help
> >
> > BR
> >
> > Tom
>
>
I configured a DataImportHandler using a FileListEntityProcessor to import
files from a folder.
This setup works really great, but I do not know how I should handle changes
on the filesystem (e.g. files added, deleted, ...).
Should I always do a "full-import"? As far as I read, "delta-import" is only
s
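A minimal data-config.xml sketch, assuming that picking up new and changed files via the newerThan attribute is acceptable; the paths, file pattern and the inner entity are placeholders, and deleted files would still need a full-import or explicit delete handling:

  <dataConfig>
    <dataSource type="BinFileDataSource"/>
    <document>
      <entity name="files" processor="FileListEntityProcessor"
              baseDir="/path/to/folder" fileName=".*\.xml"
              recursive="true" rootEntity="false"
              newerThan="'${dataimporter.last_index_time}'">
        <!-- inner entity that actually parses each file goes here -->
      </entity>
    </document>
  </dataConfig>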
Hi,
I was wondering if there was a reason Solr 7.4 is still recommending ZK 3.4.11
as the major version in the official changelog vs shipping with 3.4.12 despite
the known regression in 3.4.11. Are there any known issues with running 7.4
alongside ZK 3.4.12. We are beginning a major Solr upgrad
M, Zimmermann, Thomas wrote:
>> I was wondering if there was a reason Solr 7.4 is still recommending ZK
>>3.4.11 as the major version in the official changelog vs shipping with
>>3.4.12 despite the known regression in 3.4.11. Are there any known
>>issues with running 7.4 alongsid
Hi,
We're transitioning from Solr 4.10 to 7.x and working through our options
around managing our schemas. Currently we manage our schema files in a git
repository, make changes to the xml files, and then push them out to our
zookeeper cluster via the zkcli and the upconfig command like:
/apps
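For reference, the general shape of such an upconfig call; the host, config name and path below are placeholders:

  server/scripts/cloud-scripts/zkcli.sh -zkhost zk1.example.com:2181,zk2.example.com:2181 \
    -cmd upconfig -confname myconf -confdir /path/to/conf

The same upload can also be done with bin/solr zk upconfig -n myconf -d /path/to/conf -z zk1.example.com:2181.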
Hi,
We have several cores with identical configurations with the sole exception
being the language of their document sets. I'd like to leverage Config Sets to
manage them going forward, but ran into two issues I'm struggling to solve
conceptually.
Sample Cores:
our_documents
our_documents_de
ou
Thanks all! I think we will maintain our current approach of hand editing
the configs in git and implement something at the shell level to automate
the process of running upconfig and performing a core reload.
. Memory consumption has gone
through the roof. Where previously a 512 MB heap was enough, now 6 GB isn't enough to
index all files.
kind regards,
Thomas
> On 04.07.2018 at 15:03, Markus Jelsma wrote:
>
> Hello Andrey,
>
> I didn't think of that! I will try it when i have the courage aga
Hi,
We're in the midst of our first major Solr upgrade in years and are trying to
run some cleanup across all of our client codebases. We're currently using the
standard PHP Solr Extension when communicating with our cluster from our
Wordpress installs. http://php.net/manual/en/book.solr.php
F
if anybody else experienced the same
problems?
kind regards,
Thomas
Hi,
Solr ships with a script that handles OOM errors and produces log files
for every case with content like this:
Running OOM killer script for process 9015 for Solr on port 28080
Killed process 9015
This script works ;-)
kind regards
Thomas
> On 02.08.2018 at 12:28, ... wrote:
have several setups that trigger this reliably, but
there is no simple test case that "fails" if Tika 1.17 or 1.18 is used. I also
do not know if the error is inside Tika or inside the glue code that makes Tika
usable in Solr.
Should I file an issue for this?
kind regards,
Thomas
>
t;, "country": {"set": "country2"' &&
curl "$URL/select?q=id:TEST_ID1"
"response":{"numFound":1,"start":0,"docs":[
{
"id":"TEST_ID1",
"description":["desc
Hi
I've run into an issue with creating a Managed Stopwords list that has the
same name as a previously deleted list. Going through the same flow with
Managed Synonyms doesn't result in this unexpected behaviour. Am I missing
something or did I discover a bug in Solr?
On a newly started solr with
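The flow in question, written out as a hedged sequence of managed-resource calls; the collection and list names are placeholders:

  # create a managed stopwords list and add some words
  curl -X PUT -H 'Content-type: application/json' --data-binary '["a","an","the"]' \
    "http://localhost:8983/solr/mycollection/schema/analysis/stopwords/mylist"
  # delete the whole list
  curl -X DELETE "http://localhost:8983/solr/mycollection/schema/analysis/stopwords/mylist"
  # after a reload, re-create a list with the same name
  curl -X PUT -H 'Content-type: application/json' --data-binary '["foo"]' \
    "http://localhost:8983/solr/mycollection/schema/analysis/stopwords/mylist"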
Hi Folks –
A few questions before I tackle an upgrade here. Looking to go from 7.4 to 7.7.2
to take advantage of the improved Tiered Merge Policy and segment cleanup – we
are dealing with some high (45%) deleted doc counts in a few cores. Would
simply upgrading Solr and setting the cores to use
Thanks so much Erick. Sounds like this should be a perfect approach to helping
resolve our current issue.
On 2/24/20, 6:48 PM, "Erick Erickson" wrote:
Thomas:
Yes, upgrading to 7.5+ will automagically take advantage of the
improvements, eventually... No, you don’t have
j/
kind regards
Thomas
olr to
copy the configset instead of using it directly, but maybe I missed it.
Is this use case not possible with Solr standalone, or did I miss
something obvious? Is my version too old, and is there something in a
more recent version?
Thanks,
--
Thomas Mortagne
cat:{"add-distinct":["a","b","d"]}}]'
{
  "responseHeader":{
    "rf":2,
    "status":0,
    "QTime":81}}
$ curl 'http://localhost:8983/solr/techproducts/select?q=id%3A123&omitHeader=true'
{
  "response":{"numFound":1,"start":0,"docs":[
      {
        "id":"123",
        "cat":["a",
          "b",
          "c",
          "a",
          "b",
          "d"],
        "_version_":1668919799351083008}]
  }}
Is this a known issue or am I missing something here?
Kind regards
Thomas Corthals
JSON only allows UTF-8, UTF-16 or UTF-32.
Best,
Thomas
On Tue 9 Jun 2020 at 07:11, Hup Chen wrote:
> Any idea?
> I still can't get TolerantUpdateProcessorFactory working; Solr
> exits at any error without any tolerance. Any suggestions will be
> appreciated.
> curl &quo
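A minimal solrconfig.xml sketch of the kind of chain this requires; the processor only takes effect if the update request actually runs through a chain containing it (the chain name is illustrative):

  <updateRequestProcessorChain name="tolerant" default="true">
    <processor class="solr.TolerantUpdateProcessorFactory">
      <!-- -1: never abort the batch, report the failed documents instead -->
      <int name="maxErrors">-1</int>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

If the chain is not the default it has to be selected per request with update.chain=tolerant. Note that it only tolerates per-document errors; a request body that cannot be parsed at all, for example because of a broken encoding, still fails as a whole.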
"cord" (LD: 1, freq:
1), "card" (LD: 2, freq: 4). In standalone mode, I get "corp", "cord",
"card" with extendedResults true or false.
The results are the same for the /spell and /browse request handlers in
that configset. I've put all combinations side by side in this spreadsheet:
https://docs.google.com/spreadsheets/d/1ym44TlbomXMCeoYpi_eOBmv6-mZHCZ0nhsVDB_dDavM/edit?usp=sharing
Is it something in the configuration? Or a bug?
Thomas
Can anybody shed some light on this? If not, I'm going to report it as a
bug in JIRA.
Thomas
On Sat 13 Jun 2020 at 13:37, Thomas Corthals wrote:
> Hi
>
> I'm seeing different ordering on the spellcheck suggestions in cloud mode
> when using spellcheck.ex
Since "overseer" is also problematic, I'd like to propose "orchestrator" as
an alternative.
Thomas
On Fri 19 Jun 2020 at 04:34, Walter Underwood wrote:
> We don’t get to decide whether “master” is a problem. The rest of the world
> has already decided that it i
ferent for docValues, that's even more reason to state it
clearly in the ref guide to avoid confusion.
Best,
Thomas
On Thu 2 Jul 2020 at 20:37, Erick Erickson wrote:
> This is true _unless_ you fetch from docValues. docValues are SORTED_SETs,
> so the results will be both ordered an
On Fri 3 Jul 2020 at 14:11, Bram Van Dam wrote:
> On 03/07/2020 09:50, Thomas Corthals wrote:
> > I think this should go in the ref guide. If your product depends on this
> > behaviour, you want reassurance that it isn't going to change in the next
> > release. No
Hi,
Is it possible to specify a Tokenizer Factory on a Managed Synonym Graph
Filter? I would like to use a Standard Tokenizer or Keyword Tokenizer on
some fields.
Best,
Thomas
. To get fully
correct positional queries when your synonym replacements are multiple
tokens, you should instead apply synonyms using this filter at query time.
Regards,
Thomas
On Thu 30 Jul 2020 at 10:17, Colvin Cowie wrote:
> That does seem like an unhelpful example to have, though
>
&
"index":3},
{
"name":"all",
"role":"admin",
"index":4}],
"user-role":{
"solr":"admin",
"user1":"role1",
"user2":"role2"},
"":{"v":0}}}
With this setup, I'm unable to read from any of the cores with either user.
If I "delete-permission":4 both users can read from either core, not just
"their" core.
I have tried custom permissions like this to no avail:
{"name": "access-core1", "collection": "core1", "role": "role1"},
{"name": "access-core2", "collection": "core2", "role": "role2"},
{"name": "all", "role": "admin"}
Is it possible to do this for cores? Or am I out of luck because I'm not
using collections?
Regards
Thomas
Hi Steve
I have a real-world use case. We don't apply a synonym filter at index
time, but we do apply a managed synonym filter at query time. This allows
content managers to add new synonyms (or remove existing ones) "on the fly"
without having to reindex any documents.
Thoma
might cause
this issue? Or is there a problem with Solr and Windows 10 or Amazon
Corretto?
As I already said, the procedure described above worked well for the
Solr versions since Solr 6.6.1, without java.lang.OutOfMemoryError after
creating the collection.
Best regards,
Thomas Heldmann
as indeed "techproducts", so it
is the collection name that has changed.
Is it just me doing something wrong? It is hard to believe such an obvious
error has not been corrected yet. It seems the 7.1 tutorial has the same error.
/Thomas Egense
Thank you,
I will fix the image to have the correct collection name. It was confusing
showing a different collection overview image
than the one you see when following the tutorial.
/Thomas
On Thu, Jun 27, 2019 at 3:45 PM Alexandre Rafalovitch
wrote:
> Actually, the tutorial does say &quo
ctually exists.
$ curl -i "
http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/foobar";
| head -n 1
HTTP/1.1 404 Not Found
$ curl -I "
http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/foobar";
| head -n 1
HTTP/1.1 200 OK
I presume that's a bug?
Thomas
You can send big queries as a POST request instead of a GET request.
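For example with curl, where the collection name and query are placeholders:

  curl "http://localhost:8983/solr/mycollection/select" \
    --data-urlencode "q=first keyword second keyword third keyword" \
    --data-urlencode "rows=10"

curl turns this into a form-encoded POST body, so the query no longer has to fit into the URL.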
On Thu 18 Feb 2021 at 11:38, Anuj Bhargava wrote:
> Solr 8.0 query length limit
>
> We are having an issue where, when queries are too big, we get no result. And if
> we remove a few keywords we get the result.
>
> Error we get - er
Hey,
I'm playing around with the suggester component, and it works perfectly
as described: Suggestions for 'logitech mouse' include 'logitech mouse
g500' and 'logitech mouse gaming'.
However, when the words in the record supplying the suggester do not
follow each other as in the search terms, no
I have Solr as the backend to an ECommerce solution where the fields can
be configured to be searchable, which generates a schema.xml and loads
it into Solr.
Now we also allow configuring a Solr search weight per field to affect
queries, so my queries usually look something like this:
spellch
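The per-field weights typically end up as boosts in the qf parameter of an (e)dismax request; a hedged example of what such a request might carry, with made-up field names and boost values:

  q=some search terms
  defType=edismax
  qf=name^5 keywords^3 description^1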
Hey,
in German, you can string most nouns together by using hyphens, like
this:
Industrie = industry
Anhänger = trailer
Industrie-Anhänger = trailer for industrial use
Here [1], you can see me querying "Industrieanhänger" from the "name"
field (name:Industrieanhänger), to make sure the index a
used the analysis tab in the admin UI? You can type in
sentences for both index and query time and see how they would be
analysed by various fields/field types.
Once you have got index time and query time to result in the same tokens
at the end of the analysis chain, you should start seeing matches in
A friend and I are trying to develop some software using Solr in the
background, and with that come a lot of changes. We're used to older
versions (4.3 and below). We especially have problems with the
autosuggest feature.
This is the field definition (schema.xml) for our autosuggest field:
.
God damn. Thank you.
*ashamed*
On 30.06.2015 00:21, Erick Erickson wrote:
> Try not putting it in double quotes?
>
> Best,
> Erick
>
> On Mon, Jun 29, 2015 at 12:22 PM, Thomas Michael Engelke
> wrote:
>
>> A friend and I are trying to develop s
Hey,
we have multiple documents that are matches for the query in question
("name:hubwagen"). Thing is, some of the documents only contain the
query, while others match 100% in the "name" field:
Hochhubwagen
5.9861565
Hubwagen
5.9861565
The debug looks like this (for the first and 5th
Hello Team,
I have had a few experiences where restarting a Solr node is the only option when
a core goes down. I am trying to automate the restart of a Solr server when
a core goes down or a replica is unresponsive over a period of time.
I have a script to check if the cores/replicas associated wit
Hi,
I am new to Solr. I have a use case to add a new node when an existing node
goes down. The new node with a new IP should contain all the replicas that
the previous node had. So I am using a network storage (cinder storage) in
which the data directory (where the solr.xml and the core directorie
Hi,
an example. We have 2 records with this data in the same field
(description):
1: Lufthutze vor Kühler Bj 62-65, DS
2: Kühler HY im
Austausch, Altteilpfand 250 Euro
A search with the parameters
'description:Kühler' does provide this debug:
2.3234584 = (MATCH)
weight(description:kühler in 40
rmFreq=1.0
6.226491 = idf(docFreq=64, maxDocs=12099)
0.375 =
fieldNorm(doc=5754)
Am I using this feature wrong?
On 30.07.2014 14:48, Ahmet Arslan wrote:
> Hi,
>
> Please see :
https://issues.apache.org/jira/browse/SOLR-3925 [1]
>
> Ahmet
>
> On
Wednesday, July 30, 2014 2:39 PM
Hello everybody,
we have a legacy solr installation in version 3.6.0.1. One of the indices
defines a field named "content" as a fulltext field where a product
description will reside. One of the records indexed contains the following
data (excerpt):
z. B. in der Serie 26KA.
I had the problem tha
you for taking a look.
2014-01-29 Jack Krupansky
> What field type and analyzer/tokenizer are you using?
>
> -- Jack Krupansky
>
> -Original Message----- From: Thomas Michael Engelke Sent: Wednesday,
> January
There may have been some subtle word delimiter
> filter changes between 3.x and 4.x.
>
> Read:
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201202.mbox/%
> 3CC0551C512C863540BC59694A118452AA0764A434@ITS-EMBX-03.
> adsroot.itcs.umich.edu%3E
>
>
>
> -Original M
railing dot) to be in the index.
>
> It seems counter-intuitive, but the attributes of the index and query word
> delimiter filters need to be slightly asymmetric.
>
>
> -- Jack Krupansky
>
> -Original Message- From: Thomas Michael Engelke
> Sent: Thursday, Janua
I'm in the process of incorporating Solr spellchecking in our product.
For that, I've created a new field:
And in the
fieldType definitions:
Then I feed the names of products into the corresponding
core. They can have a lot of words (examples):
door lock rear left
Door brake,
Hi,
I'm experimenting with the Spellcheck component and have therefor
used the example configuration for spell checking to try things out. My
solrconfig.xml looks like this:
spell
default
spell
solr.DirectSolrSpellChecker
internal
wordbreak
solr.WordBreakSolrSpellChecker
ot;Sichtscheiben",
"spell": "Sichtscheiben"
},
{
"name":
"Sichtscheiben",
"spell": "Sichtscheiben"
},
{
"name":
"Sichtscheiben",
"spell": "Sichtscheiben"
}
]
}
}
Multiple records
exist
I have a problem with a stemmed german field. The field definition:
stored="true" required="false" multiValued="false"/>
...
positionIncrementGap="100" autoGeneratePhraseQueries="true">
words="stopwords.txt"/>
generateWordParts="1" generateNumberParts="1" catenateWords="1"
catenateN
afalovitch:
On 7 October 2014 08:25, Thomas Michael Engelke
wrote:
So the culprit is the asterisk at the end. As far as we can read from
the docs, an asterisk is just 0 or more characters, which means that
the literal word in front of the asterisk should match the query.
Not quite: http://wiki.
We've moved from an asterisk based autosuggest functionality
("searchterm*") to a version using a special field called autosuggest,
filled via copyField directives. The field definition:
positionIncrementGap="100">
class=
We're using Solr as a backend for an ECommerce site/system. The Solr
index stores products with selected attributes, as well as a dedicated
field for autocomplete suggestions (Done via AJAX request when typing in
the search box without pressing return).
The autosuggest field is supplied by cop
ces only tokens that are in the main index. I think this is basically
> how all the Suggester implementations are designed to work already; are you
> using one of those, or are you using the TermsComponent, or something else?
>
> -Mike
>
> On 11/10/14 2:54 AM, Thomas Mi
I'm toying around with the suggester component, like described here:
http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx
So I made 4 fields:
stored="true" multiValued="true" />
stored="true" multiValued="true" />
indexed="true" stored=
2014 08:52, Thomas Michael Engelke wrote:
> I'm toying around with the suggester component, like described here:
> http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx
> [1]
>
> So I made 4 fields:
>
> multiValued="
Like in this article
(http://www.andornot.com/blog/post/Advanced-autocomplete-with-Solr-Ngrams-and-Twitters-typeaheadjs.aspx),
I am using multiple fields to generate different options for an
autosuggest functionality:
- First, the whole field (top priority)
- Then, the whole field as EdgeNGram
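A minimal sketch of the EdgeNGram variant in that approach, with the n-gramming applied only at index time; the type name and gram sizes are illustrative:

  <fieldType name="text_edge_ngram" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="20"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>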
I believe I have encountered a bug in SOLR. I have a data type defined as
follows:
I have not been able to reproduce this problem for smaller numbers, but for
some of the very large numbers, the value that gets stored for this “aid” field
is not the same as the number that gets indexed. For e
I was using the SOLR administrative interface to issue my queries. When I
bypass the administrative interface and go directly to SOLR, the JSON return
indicates the AID is as it should be. The issue is in the presentation layer of
the Solr Admin UI. Which is good news.
Thanks all, my bad. Shoul
Hi Plamen
You should set expand to true during
...
Greetings,
Thomas
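A hedged sketch of the kind of index-time analyzer this refers to, with expand="true" so that every synonym in a group gets indexed rather than only the first entry; the type and file names are placeholders:

  <fieldType name="text_synonyms" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
              ignoreCase="true" expand="true"/>
    </analyzer>
  </fieldType>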
On 29.03.2013 17:16, Plamen Mihaylov wrote:
> Hey guys,
>
> I have the following problem - I have a website with sport players, where
> I'm using Solr to index their data. I have defined synonyms like:
Hello Solr,
Solr looks like an excellent API and it's nice to have a tutorial that makes it
easy to discover the basics of what Solr does; I'm impressed. I can see plenty
of potential uses of Solr/Lucene, and I'm now interested in just how real-time
the queries made to an index can be.
For examp
>> I work at MarkLogic, and we have a real-time transactional search engine
>> (and respository). If you are curious, contact me directly.
>>
>> I do like Solr for lots of applications -- I chose it when I was at Netflix.
>>
>> wunder
>>
>> On May 20, 2
al-time
>>> requirements, you may want to review this in the first instance, if
>>> Lucandra is of interest.
>>>
>>> On 21 May 2010, at 06:12, Walter Underwood wrote:
>>>
>>>
>>>> Solr is a very good engine, but it is not real-
u describe still looks feasible to me
> in Solr, pending the questions above (and some followups).
>
>
> On May 21, 2010, at 4:05 AM, Thomas J. Buhr wrote:
>
>> Thanks for the new information. Its really great to see so many options for
>> Lucene.
>>
>> In my sc
What about my situation?
My renderers need to query the index for fast access to layout and style info
as I already described about 3 messages ago on this thread. Another scenario is
having automatic queries triggered as my midi player iterates through the
model. As the player encounters trigg
Solr,
The Solr 1.4 EES book arrived yesterday and I'm very much enjoying it. I was
glad to see that "rich clients" are one case for embedding Solr as this is the
case for my application. Multi Cores will also be important for my RIA.
The book covers a lot and makes it clear that Solr has extens
+1
Good question, my use of Solr would benefit from nested annotated beans as well.
Awaiting the reply,
Thom
On 2010-06-03, at 1:35 PM, Peter Hanning wrote:
>
> When modeling documents with a lot of fields (hundreds) the bean class used
> with SolrJ to interact with the Solr index tends to g
at
org.apache.xmlbeans.impl.store.Locale$SaxLoader.load(Locale.java:3439)
... 48 more
Thank You
___
Thomas Lawless
Software Engineer
Sellers Community Connections
Home Office
[EMAIL PROTECTED]
Voice: (845)402-3007 t/l 268-5515