I have absolutely no idea when it comes to Drupal; the Drupal folks would be
much better equipped to answer.
Best,
Erick
On Feb 21, 2019, at 8:16 AM, Greg Robinson wrote:
Thanks for the feedback.
So here is where I'm at.
I first went ahead and deleted the existing core that was returning the
error using the following command: bin/solr delete -c new_solr_core
Now when I access the admin panel, there are no errors.
I then referred to the large "warning" box on the
Gotcha.
Let's try this: https://imgur.com/a/z5OzbLW
What I'm trying to do seems pretty straightforward:
1. Install Solr Server 7.4 on Linux (Completed)
2. Connect my Drupal 7 site to the Solr Server and use it for indexing
content
My understanding is that I must first create a core in order to c
Attachments generally are stripped by the mail server.
Are you trying to create a core as part of a SolrCloud _collection_? If so, this
is an anti-pattern, use the collection API commands. Shot in the dark.
Best,
Erick
On Feb 19, 2019, at 3:05 PM, Greg Robinson wrote:
I used the front end admin (see attached)
thanks
On Tue, Feb 19, 2019 at 3:54 PM Erick Erickson wrote:
Hmmm, that’s not very helpful…..
Don’t quite know what to say. There should be something more helpful
in the logs.
Hmmm, How did you create the core?
Best,
Erick
On Feb 19, 2019, at 1:29 PM, Greg Robinson wrote:
Thanks for your direction regarding the log.
I was able to locate it and these two lines stood out:
Caused by: org.apache.solr.common.SolrException: Could not load conf for
core new_solr_core: Error loading solr config from
/home/solr/server/solr/new_solr_core/conf/solrconfig.xml
Caused by: org.
do a recursive search for "solr.log" under SOLR_HOME…
Best,
Erick
On Feb 19, 2019, at 8:08 AM, Greg Robinson wrote:
Hi Erick,
Thanks for the quick response.
Here is what is currently contained within the conf dir:
drwxr-xr-x 2 root root 4096 Feb 18 17:51 lang
-rw-r--r-- 1 root root 54513 Feb 18 17:51 managed-schema
-rw-r--r-- 1 root root 329 Feb 18 17:51 params.json
-rw-r--r-- 1 root root 894 Feb 18 17:
Are all the other files there in your conf dir? solrconfig.xml references
things like managed-schema etc.
Also, your log file might contain more clues...
On Tue, Feb 19, 2019, 08:03 Greg Robinson wrote:
Hello,
We have Solr 7.4 up and running on a Linux machine.
I'm just trying to add a new core so that I can eventually point a Drupal
site to the Solr Server for indexing.
When attempting to add a core, I'm getting the following error:
new_solr_core:
org.apache.solr.common.SolrException:org.apac
Hi,
New to Solr, so forgive any missing info on my part.
1. I am trying to figure out how to get an HTML document's elements parsed
into a Solr dynamic field. Is it possible? So let's say I have some
specific HTML or XML tags within the HTML
document, for which I created a dynamic field
*Hello*
*The code which worked for me:*
SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shakespeare").build();
SolrQuery query = new SolrQuery();
query.setRequestHandler("/select");
query.setQuery("text_entry:henry");
query.setFields("text_entry");
Got it . Thank You for your help
Deepak
"Please stop cruelty to Animals, help by becoming a Vegan"
+91 73500 12833
deic...@gmail.com
Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool
"Plant a Tree, Go Green"
On Mon, Jan 8, 2018 at 11:48 PM, Deepak Goel wrote:
*Is this right?*
SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shakespeare/select").build();
SolrQuery query = new SolrQuery();
query.setQuery("henry");
query.setFields("text_entry");
query.setStart(0);
queryResponse = client.query(query);
I think you are missing /query handler endpoint in the URL. Plus actual
search parameters.
You may try using the admin UI to build your queries first.
Regards,
Alex
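Alex's point can be illustrated by assembling the full /select URL by hand with only the JDK. The collection and field names come from the thread; the rest is illustrative, not a definitive recipe:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BuildSelectUrl {
    public static void main(String[] args) {
        // Point at the /select request handler and pass explicit parameters;
        // getById() is a real-time get by unique key, not a search.
        String base = "http://localhost:8983/solr/shakespeare";
        String q = URLEncoder.encode("text_entry:henry", StandardCharsets.UTF_8);
        String url = base + "/select?q=" + q + "&fl=text_entry";
        System.out.println(url);
        // http://localhost:8983/solr/shakespeare/select?q=text_entry%3Ahenry&fl=text_entry
    }
}
```

Pasting the printed URL into a browser (or the admin UI's query screen, as suggested above) shows the same request SolrJ would issue.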
On Jan 8, 2018 12:23 PM, "Deepak Goel" wrote:
Hello
*I am trying to search for documents in my collection (Shakespeare). The
code is as follows:*
SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/shakespeare").build();
SolrDocument doc = client.getById("2");
*However this does not return any document. What mistake
Hold it. "date", "tdate", "pdate" _are_ primitive types. Under the
covers date/tdate are just a tlong type, newer Solrs have a "pdate"
which is a point numeric type. All that these types do is some parsing
up front so you can send human-readable data (and get it back). But
under the covers it's sti
While you're generally right, in this case it might make sense to stick
to a primitive type.
I see "unixtime" as a technical information, probably from
System.currentTimeMillis(). As long as it's not used as a "real world"
date but only for sorting based on latest updates, or choosing which
documen
Some time ago there was a Solr installation which had the same problem, and the
author explained to me that the choice was made for performance reasons.
Apparently he was sure that handling everything as primitive types would
give a boost to the Solr searching/faceting performance.
I never agreed (and one
What Hoss said, and in addition somewhere some
custom code has to be translating things back and
forth. For dates, Solr wants YYYY-MM-DDTHH:MM:SSZ
as a date string it knows how to deal with. That simply
couldn't parse as a float type so there's some custom
code that transforms dates into a float at
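For anyone doing that conversion in client code: the date string Solr expects is plain ISO-8601 in UTC, which java.time produces directly. A minimal sketch (the epoch value is arbitrary):

```java
import java.time.Instant;

public class SolrDate {
    public static void main(String[] args) {
        // Solr's canonical date form is ISO-8601 in UTC: YYYY-MM-DDTHH:MM:SSZ.
        // Instant.toString() emits exactly that layout for whole-second values,
        // so a unix timestamp in milliseconds converts directly.
        long unixMillis = 0L; // e.g. a value from System.currentTimeMillis()
        String solrDate = Instant.ofEpochMilli(unixMillis).toString();
        System.out.println(solrDate); // 1970-01-01T00:00:00Z
    }
}
```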
: Here is my question. In schema.xml, there is this field:
:
:
:
: Question: why is this declared as a float datatype? I'm just looking
: for an explanation of what is there – any changes come later, after I
: understand things better.
You would have to ask the creator of that sch
I have inherited a working SOLR installation, that has not been upgraded since
solr 4.0. My task is to bring it forward (at least 6.x, maybe 7.x). I am
brand new to SOLR.
Here is my question. In schema.xml, there is this field:
Question: why is this declared as a float datatype?
First use PatternReplaceCharFilterFactory. The difference is that
PatternReplaceCharFilterFactory works on the entire input whereas
PatternReplaceFilterFactory works only on the tokens emitted by the
tokenizer. Concrete example using WhitespaceTokenizerFactory would be
this [is some ] text
PatternRe
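The pattern itself can be sanity-checked in plain Java before wiring it into the analyzer. The regex here is an assumption about what "words in brackets" means: a literal '[', any run of non-']' characters, then ']':

```java
public class BracketStrip {
    public static void main(String[] args) {
        // The same regex a PatternReplaceCharFilterFactory definition would
        // carry in its pattern attribute.
        String pattern = "\\[[^\\]]*\\]";
        String input = "keep this [ignore this] and this";
        System.out.println(input.replaceAll(pattern, ""));
        // keep this  and this   (note the leftover double space)
    }
}
```

Once the regex behaves as intended here, the same string goes into the charFilter's pattern attribute with replacement="".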
I am sure this is very simple but I cannot get the pattern right.
How can I use solr.PatternReplaceFilterFactory to remove all words in brackets
from being indexed?
eg [ignore this]
thanks
Michael
$ solrctl --zk host:2181/solr --solr host:8983/solr/ collection --create
Catalog_search_index -s 10 -c Catalog_search_index
Without the extra ". Catalog_search_index" at the end. Also, because your
new collection's name is the same as the instanceDir's, you could just omit
that parameter and it should work ok.
Try that and see if it works.
Good luck,
Gonzalo
-----Original Message-----
From: Darshan Pandya [mailto:darshanpan...@gmail.com]
Sent: Wednesday, September 7, 2016 12:02 PM
To: solr-user@lucene.apache.org
Subject: newbie question
hello,
I am using solr cloud with cloudera. When I try to create a collection, it
fails with the following error.
Any hints / answers will be helpful.
$ solrctl --zk host:2181/solr instancedir --list
Catalog_search_index
$ solrctl --zk shot:2181/solr --solr host:8983/solr/ collection --creat
To pile on to Chris' comment. In the M/S situation
you describe, all the query traffic goes to the slave.
True, this relieves the slave from doing the work of
indexing, but it _also_ prevents the master from
answering queries. So going to SolrCloud trades
off indexing on _both_ machines to also qu
: The database of server 2 is considered the "master" and it is replicated
: regularly to server 1, the "slave".
:
: The advantage is the responsiveness of server 1 is not impacted when server
: 2 gets busy with lots of indexing.
:
: QUESTION: When deploying a SOLR 5 setup, do I set things up th
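For reference, the master/slave wiring being discussed is configured through the ReplicationHandler in solrconfig.xml. A minimal sketch, with the host name, core name, and poll interval invented for illustration:

```xml
<!-- On the master (indexing) server -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- On the slave (query) server -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/corename</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```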
: I can see there is something called a "core" ... it appears there can be
: many cores for a single SOLR server.
:
: Can someone "explain like I'm five" -- what is a core?
https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml
"In Solr, the term core is used to refer to a sin
Trying to learn about SOLR.
I can see there is something called a "core" ... it appears there can be
many cores for a single SOLR server.
Can someone "explain like I'm five" -- what is a core?
And how do "cores" differ from 3.x to 5.x.
Any pointers in the right direction are helpful!
Thanks!
R
Hi,
In my SOLR 3 deployment (inherited it), I have (1) one SOLR server that is
used by my web application, and (2) a second SOLR server that is used to
index documents via a customer datasource.
The database of server 2 is considered the "master" and it is replicated
regularly to server 1, the "s
On 3/19/2014 4:55 AM, Colin R wrote:
My question is an architecture one.
These photos are currently indexed and searched in three ways.
1: The 14M pictures from above are split into a few hundred indexes that
feed a single website. This means index sizes of between 100 and 500,000
entries each.
fig is lots of indexes with merges into the larger ones.
They are still running very fast but indexing is causing us issues.
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Newbie-Question-Master-Index-or-100s-Small-Index-tp4125407p4125447.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Wed, 2014-03-19 at 13:28 +0100, Colin R wrote:
> My question is really regarding index architecture. One big or many small
> (with merged big ones)
One difference is that having a single index/collection gives you better
ranked searches within each collection. If you only use date/filename
sort
them).
In terms of bytes, each photo has up to 1.5KB of data.
Special requirements are search by date range, text, date range and text.
Plus some boolean filtering. All results can be sorted by date or filename.
On Wed, 2014-03-19 at 11:55 +0100, Colin R wrote:
> We run a central database of 14M (and growing) photos with dates, captions,
> keywords, etc.
>
> We currently upgrading from old Lucene Servers to latest Solr running with a
> couple of dedicated servers (6 core, 36GB, 500SSD). Planning on usin
changes a day PLUS very busy search servers.
Thanks
Col
A follow up question on this (as it is kind of new functionality).
What happens if several documents are submitted and one of them fails
due to that? Do they get rolled back or only one?
Regards,
Alex.
: How do I achieve, add if not there, fail if duplicate is found. I though
You can use the optimistic concurrency features to do this, by including a
_version_=-1 field value in the document.
this will instruct solr that the update should only be processed if the
document does not already exis
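In JSON update syntax, the suggestion looks roughly like this (the field names are hypothetical). With `_version_` set to -1, the add succeeds only if no document with that id exists yet; otherwise Solr rejects it with a version-conflict error:

```json
{
  "add": {
    "doc": {
      "id": "doc-42",
      "title_t": "some title",
      "_version_": -1
    }
  }
}
```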
found. I though
that "overwriteDupes"=false would do that.
Excellent, works like a charm!
Though embarrassing, it's still a good thing the only problem was me being
blind :-)
Thank you, Toke and Erik.
On Wed, Feb 20, 2013 at 11:47 AM, Toke Eskildsen wrote:
On Wed, 2013-02-20 at 10:06 +0100, Erik Dybdahl wrote:
> However, after definining
> stored="true" multiValued="true"/>
Seems like a typo to me: You need to write "
You need to use not , that's all :)
Erik
On Feb 20, 2013, at 4:06, Erik Dybdahl wrote:
> Hi,
> I'm currently assessing lucene/solr as a search front end for documents
> currently stored in an rdbms.
> The data has been made searchable to clients, in a way so that each
> client/customer ma
Sent: Monday, July 30, 2012 22:43
To: solr-user@lucene.apache.org
Subject: newbie question
Hi,
I have been able to set up the SOLR demo environment as described in SOLR
3.6.1 tutorial:
http://lucene.apache.org/solr/api-3_6_1/doc-files/tutorial.html.
Actually, I set it up while it was still SOLR 3.6.0.
The developer I am working with has created a custom SOLR instance using
3.6.1 and h
Erick, I'll do that. Thank you very much.
Regards,
Jacek
On Tue, May 1, 2012 at 7:19 AM, Erick Erickson wrote:
The easiest way is to do that in the app. That is, return the top
10 to the app (by score) then re-order them there. There's nothing
in Solr that I know of that does what you want out of the box.
Best
Erick
On Mon, Apr 30, 2012 at 11:10 AM, Jacek wrote:
Hello all,
I'm facing this simple problem, yet impossible to resolve for me (I'm a
newbie in Solr).
I need to sort the results by score (it is simple, of course), but then
what I need is to take top 10 results, and re-order it (only those top 10
results) by a date field.
It's not the same as sort=
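The app-side re-sort Erick describes (fetch the top 10 by score, then re-order only that page by date) can be sketched in plain Java; the ids and dates are made up:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReorderTopHits {
    public static void main(String[] args) {
        // Pretend these are the top hits from Solr, already sorted by score
        // descending, paired with their date field values.
        List<String[]> topHits = new ArrayList<>();
        topHits.add(new String[] {"a", "2012-04-28T00:00:00Z"}); // best score
        topHits.add(new String[] {"b", "2012-04-30T00:00:00Z"});
        topHits.add(new String[] {"c", "2012-04-29T00:00:00Z"}); // worst score

        // Re-order only this page by the date field, newest first; ISO-8601
        // UTC strings sort correctly as plain strings.
        topHits.sort(Comparator.comparing((String[] h) -> h[1]).reversed());

        for (String[] h : topHits) {
            System.out.println(h[0]); // prints b, c, a
        }
    }
}
```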
any xslt transform or schema.
thanks
Ben
-----Original Message-----
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Sunday, November 20, 2011 9:05 AM
To: solr-user@lucene.apache.org
Subject: Re: how to transform a URL (newbie question)
Ben,
Not quite sure how to interpret what you're asking here. Are you speaking of
the /browse view? If so, you can tweak the templates under conf/velocity to
make links out of things.
But generally, it's the end application that would take the results from Solr
and render links as appropria
I am a beginner to solr and need to ask the following:
Using the apache-solr example, how can I display a URL in the XML document
as an active link/URL in HTTP? Do I need to add some special transform in
the example.xslt file?
thanks
Ben
: If using CommonsHttpSolrServer query() method with parameter wt=json, when
: retrieving QueryResponse, how to do to get JSON result output stream ?
when you are using the CommonsHttpSolrServer level of API, the client
takes care of parsing the response (which is typically in an efficient
bina
:
http://lucene.472066.n3.nabble.com/Newbie-question-tp3413106p3413106.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Steve,
I've filed a new JIRA issue along with the patch, which can be found at
<https://issues.apache.org/jira/browse/LUCENE-3406>.
Please let me know if you see any problem.
Thanks!
-Sid
Whether multi-valued or token-streams, the question is search, not
(de)serialization: that's opaque to Solr which will take and give it to you as
needed.
paul
On Aug 25, 2011, at 21:24, Zac Tolley wrote:
My search is very simple, mainly on titles, actors, show times and channels.
Having multiple lists of values is probably better for that, and as the
order is kept the same its relatively simple to map the response back onto
pojos for my presentation layer.
On Thu, Aug 25, 2011 at 8:18 PM, Paul Lib
Delimited text is the baby form of lists.
Text can be made very very structured (think XML, ontologies...).
I think the crux is your search needs.
For example, with Lucene, I made a search for formulæ (including sub-terms) by
converting the OpenMath-encoded terms into rows of tokens and querying
have come to that conclusion so had to choose between multiple fields with
multiple values or a field with delimited text, gone for the former
On Thu, Aug 25, 2011 at 7:58 PM, Erick Erickson wrote:
nope, it's not easy. Solr docs are flat, flat, flat with the tiny
exception that multiValued fields are returned as lists.
However, you can count on multi-valued fields being returned
in the order they were added, so it might work out for you to
treat these as parallel arrays in Solr documents.
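Treating two multiValued fields as parallel arrays, as described above, might look like this in application code. The field values are invented; only the starttime/channelname names come from the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelArrays {
    public static void main(String[] args) {
        // Values as returned from two multiValued fields of one Solr document;
        // Solr preserves the order in which values were added, so index i of
        // each list describes the same showing.
        List<String> starttimes = List.of("2011-08-25T20:00:00Z", "2011-08-25T22:30:00Z");
        List<String> channelnames = List.of("BBC One", "Channel 4");

        // Zip index-by-index to rebuild showing objects (here, just strings).
        List<String> showings = new ArrayList<>();
        for (int i = 0; i < starttimes.size(); i++) {
            showings.add(channelnames.get(i) + " @ " + starttimes.get(i));
        }
        System.out.println(showings.get(0)); // BBC One @ 2011-08-25T20:00:00Z
    }
}
```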
From: syyang [mailto:syyan...@gmail.com]
> Sent: Wednesday, August 24, 2011 10:07 PM
> To: solr-user@lucene.apache.org
> Subject: Newbie question, ant target for packaging source files from
> local copy?
>
> Hi all,
>
> I am trying to package source files containing local changes
I know I can have multi value on them but that doesn't let me see that
a showing instance happens at a particular time on a particular
channel, just that it shows on a range of channels at a range of times
Starting to think I will have to either store a formatted string that
combines them or keep
You could change starttime and channelname to multiValued=true and use
these fields to store all the values for those fields.
showing.movie_id and showing.id probably aren't needed in a solr record.
On 8/24/11 7:53 AM, Zac Tolley wrote:
I have a scenario in which I have a film and showings; each film has
multiple showings at set times on set channels, so I have:
Movie
-
id
title
description
duration
Showing
-
id
movie_id
starttime
channelname
I want to know can I store this in solr so that I keep this stucture?
M
To: solr-user@lucene.apache.org
Cc: Robert Petersen
Subject: Re: Newbie question: how to deal with different # of search
results per page due to pagination then grouping
How do you know whether to provide a 'next' button, or whether you are at
the end of your facet list?
On 6/1/2011 4:47 PM, Robert Petersen wrote:
> I think f
apache.org/solr/SimpleFacetParameters#facet.offset
-----Original Message-----
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Wednesday, June 01, 2011 12:41 PM
To: solr-user@lucene.apache.org
Subject: Re: Newbie question: how to deal with different # of search
results per page due to pagination then grouping
There's no great way to do that.
One approach would be using facets, but that will just get you the
author names (as stored in f
Sent: Wednesday, June 01, 2011 11:56 AM
To: solr-user@lucene.apache.org
Subject: Newbie question: how to deal with different # of search results
per page due to pagination then grouping
Apologize if this question has already been raised. I tried searching but
couldn't find the relevant pos
so much. And please let me know if you need more details not
provided here.
B
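The facet.offset approach mentioned in this thread amounts to simple page math, plus a common trick for the 'next' button question: ask Solr for one more facet value than the page size. A small sketch, with all names illustrative:

```java
import java.util.List;

public class FacetPaging {
    // facet.offset for a given zero-based page of facet values.
    static int facetOffset(int page, int pageSize) {
        return page * pageSize;
    }

    // Request facet.limit = pageSize + 1; getting an extra value back
    // means there is at least one more page to show.
    static boolean hasNextPage(List<String> returnedValues, int pageSize) {
        return returnedValues.size() > pageSize;
    }

    public static void main(String[] args) {
        System.out.println(facetOffset(2, 10)); // 20
        List<String> returned = List.of("a", "b", "c"); // pageSize 2, got 3 back
        System.out.println(hasNextPage(returned, 2)); // true
    }
}
```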
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Sunday, May 29, 2011 9:00 PM
To: solr-user@lucene.apache.org
Subject: Re: newbie question for DataImportHandler
This trips up a lot of folks. Solr just marks docs as deleted; the terms etc.
are left in the index until an optimize is performed, or the s
> seeing the old data in my new index.
> Does Solr keep a cached copy of the index somewhere?
> I hope I have described my problem clearly.
> Thanks in advance.
From: niosi [mailto:antonio...@gmail.com]
Sent: Tuesday, May 24, 2011 4:43 PM
To: solr-user@lucene.apache.org
Subject: newbie question for DataImportHandler
Hi,
I am new to Solr; apologize in advance if this is a stupid question.
I have created a simple database, with only 1 table with 3 columns, id, name,
.
Thanks in advance.
"There is no current feature" is what I meant. Yes, it would be very
handy to do this.
I handled this problem in the DIH by creating two documents, both with
the same unique ID. The first doc just had the metadata. The second
document parsed the input with Tika, but had 'skip doc on error' set
On Nov 14, 2010, at 3:02pm, Lance Norskog wrote:
Yes, the ExtractingRequestHandler uses Tika to parse many file formats.
Solr 1.4.1 uses a previous version of Tika (0.6 or 0.7).
Here's the problem with Tika and extraction utilities in general: they
are not perfect. They will fail on some files. In the
ExtractingRequestHandler's case, there i
Thanks for all the responses.
Govind: To answer your question, yes, all I want to search is plain text
files. They are located in NFS directories across multiple Solaris/Linux
storage boxes. The total storage is in hundreds of terabytes.
I have just got started with Solr and my understanding is t
Another pov you might want to think about - what kind of search you want.
Just plain - full text search or there is something more to those text
files. Are they grouped in folders? Do the folders imply certain kind of
grouping/hierarchy/tagging?
I recently was trying to help somebody who had files
About web servers: Solr is a servlet war file and needs a Java web
server "container" to run. The example/ folder in the Solr distribution
uses 'Jetty', and this is fine for small production-quality projects.
You can just copy the example/ directory somewhere to set up your own
running Solr; th
Think of the data import handler (DIH) as Solr pulling data to index
from some source based on configuration. So, once you set up
your DIH config to point to your file system, you issue a command
to solr like "OK, do your data import thing". See the
FileListEntityProcessor.
http://wiki.apache.org/s
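A data-config.xml sketch along those lines, using FileListEntityProcessor to walk the file system and PlainTextEntityProcessor to read each file. The directory, file pattern, and field names are invented for illustration, not taken from the thread:

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8" name="fds"/>
  <document>
    <!-- Walk the file system, emitting one row per matching file. -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/textfiles" fileName=".*\.txt$"
            recursive="true" rootEntity="false">
      <!-- Read each file's text into the 'plainText' column. -->
      <entity name="content" processor="PlainTextEntityProcessor"
              dataSource="fds" url="${files.fileAbsolutePath}">
        <field column="plainText" name="content_txt"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```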
Hi Lance,
Thank you very much for responding (not sure how I reply to the group, so,
writing to you).
Can you please expand on your suggestion? I am not a web guy and so, don't
know where to start.
What is the difference between SolrJ and DataImportHandler? Do I need to set
up web servers on all
Using 'curl' is fine. There is a library called SolrJ for Java and
other libraries for other scripting languages that let you upload with
more control. There is a thing in Solr called the DataImportHandler
that lets you script walking a file system.
On Thu, Nov 11, 2010 at 8:38 PM, K. Seshadri Iye
Hi,
Pardon me if this sounds very elementary, but I have a very basic question
regarding Solr search. I have about 10 storage devices running Solaris with
hundreds of thousands of text files (there are other files, as well, but my
target is these text files). The directories on the Solaris boxes a
More directly: if the 'Artikel' field is a "string", only the whole
string will match:
Artikel:"Kerstman baardstel".
Or you can use a wildcard: Kerstmann* or just Kerst*
If it is a "text" field, it is chopped into words and
q=Artikel:Kerstmann would work.
Gora Mohanty wrote:
On Sat, 4
On Sat, 4 Sep 2010 01:15:11 -0700 (PDT)
BobG wrote:
>
> Hi,
> I am trying to set up a new SOLR search engine on a windows
> platform. It seems like I managed to fill an index with the
> contents of my SQL server table.
>
> When I use the default *.* query I get a nice result:
[...]
> However wh