Hi Lance,
thanks for your explanation.
As far as I know, in distributed search I have to tell Solr which other shards
it has to query. So, if I want to query a specific core that is present in all
my shards, I could tell Solr this by using the shards param plus the specified
core on each shard.
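For example (hostnames and core name hypothetical), something like:

    http://host1:8983/solr/core1/select?q=foo&shards=host1:8983/solr/core1,host2:8983/solr/core1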
Using SolrCl
By the way: although I am asking about SolrCloud explicitly again, I will take
your advice and try distributed search first to understand the concept
better.
Regards
Hi Alexander,
thank you for your response.
You said that the old index files were still in use. Does that mean Linux does
not *really* delete them (i.e. free the disk space) until Solr closes its file
handles on them, which happens when the core is reloaded?
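(I guess something like "lsof | grep deleted" would confirm that the Solr
process still holds the old, unlinked files open.)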
Thank you for sharing your experiences!
Kind regards,
Em
Alexander Kanarsky wrote:
Have you tried the lucene-hunspell plugin? I haven't tested it, but it seems
promising if it works in 1.4.1.
http://rcmuir.wordpress.com/2010/03/02/minority-language-support-for-lucene-and-solr/
Matti
2011/1/21 Laura Virtala:
> On 01/21/2011 11:26 AM, Laura Virtala wrote:
>>
>> Hello,
>>
>> I cannot fin
Hi,
I would like to restrict access to the /update/csv request handler.
Is there a ready-to-use UpdateRequestProcessor for that?
My first idea was to inherit from CSVRequestHandler and to override
public void handleRequest(SolrQueryRequest req, SolrQueryResponse rsp) {
    // ... restrict-by-IP code here ...
}
No. SolrQueryRequest doesn't (currently) have access to the actual HTTP
request coming in. You'll need to do this either with a servlet filter
registered in web.xml, or restrict it with some other external firewall'ish
technology.
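A minimal sketch of such a filter (the class name, allowed IP, and mapping are
hypothetical; adapt them to your setup):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    public class RestrictCsvFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        // Reject any request whose client address is not on the allow-list.
        public void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain)
                throws IOException, ServletException {
            if (!"127.0.0.1".equals(req.getRemoteAddr())) {
                ((HttpServletResponse) rsp).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, rsp);
        }
    }

registered in web.xml with something like:

    <filter>
        <filter-name>restrictCsv</filter-name>
        <filter-class>RestrictCsvFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>restrictCsv</filter-name>
        <url-pattern>/update/csv</url-pattern>
    </filter-mapping>

Put the mapping before SolrDispatchFilter's so it runs first.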
Erik
On Jan 23, 2011, at 13:21, Teebo wrote:
Most times people do this by running Solr ONLY on localhost, and running some
kind of permission scheme through a server-side application.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from other
Is it possible to use ONE definition of a dynamic field type for inserting
multiple dynamic fields of that type with different names? Or do I need a
separate dynamic field definition for each eventual field?
Can I do this?
[schema snippet stripped by the mail archive]
and then, for the insert, doing
all t
Yep, you can. Although I'm not sure you can use a wildcard prefix (perhaps
you can, I'm just not sure). I always use wildcard suffixes.
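For example (field and type names hypothetical), a single definition in
schema.xml like

    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>

lets you add any number of differently named fields at insert time:

    <field name="color_s">blue</field>
    <field name="brand_s">acme</field>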
Cheers,
Geert-Jan
2011/1/23 Dennis Gearon
> Is it possible to use ONE definition of a dynamic field type for inserting
> multiple dynamic fields of that type w
Hi all,
I wasted the last few hours trying to serialize some column values (from
MySQL) into a Solr column, but I just can't find such a function. I'll use
the value in PHP - I don't know if it is possible to serialize in PHP style
at all. This is what I tried, and it works with a given factor:
Depends on your process chain to the eventual viewer/consumer of the data.
The questions to ask are:
A/ Is the data IN Solr going to be viewed or processed in its original form?
--> set stored='true'
--> no serialization needed.
B/ If it's going to be analyzed and searched for separ
My favorite "other external firewall'ish technology" is just an Apache
front-end reverse-proxying to the Java servlet container running Solr, with
access controls in Apache.
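Roughly like this (Apache 2.2-style directives; host, port, and paths are just
placeholders, and mod_proxy/mod_proxy_http must be loaded):

    ProxyPass /solr http://localhost:8983/solr
    ProxyPassReverse /solr http://localhost:8983/solr

    # Lock down the update handlers to localhost only
    <Location /solr/update>
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>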
I haven't actually done it with Solr myself, though; my Solr is behind a
firewall, accessed by trusted apps only. Be careful maki
All,
I am having problems building Solr trunk on my Windows 7 machine. I
get the following errors...
BUILD FAILED
C:\Apache\Solr-Nightly\build.xml:23: The following error occurred while executing this line:
C:\Apache\Solr-Nightly\lucene\common-build.xml:529: The following error occurred while ex
+1 on Nutch!
On Fri, Jan 21, 2011 at 4:11 PM, Markus Jelsma wrote:
> Hi,
>
> Please take a look at Apache Nutch. It can crawl through a file system over
> FTP.
> After crawling, it can use Tika to extract the content from your PDF files and
> other. Finally you can then send the data to your Solr
I think I just ran into the same thing, see: SOLR-2303.
The short form is that it's a wonky pathing issue. I faked a fix,
but it appears more complex than my simple fix would handle,
so I have to drop it for a while.
Best
Erick
On Sun, Jan 23, 2011 at 9:31 PM, Adam Estrada wrote:
> All,
>
> I a
So I did manage to get this to build...
ant compile does it.
Didn't it use to use straight Maven? It's pretty hard to keep track of what's
what... Anyway, is there any way/reason all the cool Lucene jars aren't getting
copied into $SOLR_HOME/lib? That would really help and save a lot of time.
On Mon, Jan 24, 2011 at 8:15 AM, Adam Estrada wrote:
> +1 on Nutch!
[...]
Would it be possible for Markus and you to clarify
what the advantages of Nutch are in crawling a
well-defined filesystem hierarchy? A simple shell script
that POSTs to Solr works fine for this, so why would
one choose
I tried editing the schema file and indexing my own log. The error that I
got is:

root@karunya-desktop:/home/karunya/apache-solr-1.4.1/example/exampledocs# java -jar post.jar sample.txt
SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other
I'd be happy to comment:
A simple shell script doesn't provide URL filtering and control of how you
crawl those documents on the local file system. Nutch has several levels of URL
filtering based on regex, MIME type, and others. Also, if there are any
outlinks in those local files that point to
I tried those examples. Is it compulsory that I make it into XML? How does it
index CSV? Should I post the entire schema that I made myself and the text
file that I tried to index?
-
DINESHKUMAR . M
I am neither especially clever nor especially gifted. I am only very, very
curious
On Mon, Jan 24, 2011 at 11:18 AM, Dinesh wrote:
>
> I tried those examples. Is it compulsory that I make it into XML? How does
> it index CSV?
You will have to convert to either XML or CSV, but neither of those should
be too difficult.
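For CSV, something like this should then work (file name hypothetical):

    curl 'http://localhost:8983/solr/update/csv?commit=true' --data-binary @sample.csv -H 'Content-type: text/plain; charset=utf-8'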
> Should I post my e
I have a group of subindexes, each of which is a core in my Solr now. I want
to make one query over some of them; how can I do that? And can I classify the
response docs by index, using facet search?
Thanks
Kun
I did all the configuration correctly. Previously I missed a configuration
file; after adding it, I'm getting a new error:

Unknown FieldType: 'string' used in QueryElevationComponent

I found it defined in solrconfig.xml. I didn't change any line there, but I
don't know why I am getting this error.
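As far as I understand, the 'string' type itself should come from schema.xml;
the stock example declares it roughly as

    <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>

so maybe I removed that line while editing.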
On Mon, Jan 24, 2011 at 11:54 AM, Dinesh wrote:
>
> I did all the configuration correctly. Previously I missed a configuration
> file
Sorry, what are you trying to configure now? The built-in Solr example,
or the setup for your log files? Did you get the built-in Solr example to
work?
How were
Is there a difference between sending optimize=true with
the full-import command or sending optimize=true as
a separate command after finishing full-import?
Regards,
Bernd
On 23.01.2011 02:18, Espen Amble Kolstad wrote:
> You're not doing optimize; I think optimize would delete your old index.
I think optimize only ever gets done when either a full-import or a
delta-import is done. You could optimize the "normal" way though, see:
http://wiki.apache.org/solr/UpdateXmlMessages
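For example, a minimal sketch:

    curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<optimize waitFlush="false" waitSearcher="false"/>'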
- Espen
On Mon, Jan 24, 2011 at 8:05 AM, Bernd Fehling wrote:
>
> Is there a difference between sending optimize=t
I sent commit=true&optimize=true as a separate command but nothing
happened. I will try with the additional options
waitFlush=false&waitSearcher=false&expungeDeletes=true
I wonder why the DIH admin GUI (debug.jsp) is not sending optimize=true
together with full-import?
Regards,
Bernd
On 24.01.2011
Could you please give a pointer to the SolrCloud architecture?
Could you please give a comprehensive comparison between it and Katta?
* targeted app differences?
* scalability differences?
* flexibility differences, and so on
Thanks,
Sean
On Wed, Jan 19, 2011 at 12:07 PM, Mark Miller wrote: