I'm experiencing the issue described here; however, I am using Solr 8.0.0:
https://issues.apache.org/jira/browse/SOLR-11616
I can't create backups using the backup API. One or two minutes after I
start the backup, it throws the following NoSuchFileException,
with a different file each time:
single data-config.xml file for all the
environments?
On 6/12/2019 7:46 PM, Hugo Angel Rodriguez wrote:
> Thanks Shawn for your answers
>
> Regarding your question: " Are these environments on separate Solr instances,
> separate servers, or are they on the same Solr instance?"
data-config.xml file for all the
environments?
On 6/12/2019 9:05 AM, Hugo Angel Rodriguez wrote:
> I need to configure a single data-config.xml file in solr for SAS AML
> 7.1. I have three environments: Development, quality and production,
> and you know the first lines in a data-config.xml
Hi
I need to configure a single data-config.xml file in Solr for SAS AML 7.1. I
have three environments: development, quality, and production, and as you know,
the first lines in a data-config.xml file are for the connection to a database
(database name, database server, port, user, password, etc.). Accord
That's correct - the original source of the data I was crawling had
character 160 (a non-breaking space) where a regular space should be.
This took a while to find. :) Solr is working fine. Thank you!
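A quick way to apply Erick's char-by-char suggestion further down this thread: dump each code point, since a non-breaking space (160) looks identical to a regular space (32) on screen. This is a standalone sketch with made-up sample strings, not code from the thread:

```java
public class CharInspector {
    // Returns each character of a string as its numeric code point,
    // making invisible characters such as NBSP (160) stand out.
    static String describe(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            sb.append((int) s.charAt(i));
            if (i < s.length() - 1) sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String normal = "Some Text";        // regular space (32)
        String crawled = "Some\u00A0Text";  // non-breaking space (160)
        System.out.println(describe(normal));
        System.out.println(describe(crawled));
    }
}
```

Running this on the indexed value and the query value side by side immediately shows any position where the code points differ.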
On Tue, 20 Nov 2018 at 1:28, Shawn Heisey wrote:
> On 11/19/2018 3:31 PM, Angel Todorov wrote:
> > the *real* issue is that SOLR
like this "Some Text".
But why does SOLR do this?
Thanks
On Mon, Nov 19, 2018 at 11:50 PM Angel Todorov wrote:
> The only thing that works is this: {!term f=MyCustomField}Some Text
>
> Thanks
>
>
> On Thu, Nov 15, 2018 at 7:13 PM Erick Erickson
> wrote:
>
>
> > Try comparing strings char by char. White spaces are sometimes
> > unprintable characters. Erick
> > -------- Original message -------- From: Angel Todorov
> > <attodo...@gmail.com> Date: 2018-11-15 04:06 (GMT-05:00) To:
> > solr-user@lucen
"querystring":"MyCustomField:\"Some Text\"",
"parsedquery":"MyCustomField:Some Text",
"parsedquery_toString":"MyCustomField:Some Text",
"explain":{},
"QParser":"LuceneQParser",
Thank you
On Thu, Nov 15, 2
Hi guys,
I have SOLR 6.5 , and a custom defined field which is of type string (not
text or text_general). In some document, there is the value for that field,
for example, "Some Text" . When I query by myFieldName:"Some Text" , I
don't get any matches, but I think I should, because this matches th
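Since a string field is not analyzed, it only matches the stored value character for character, and the term query parser is the usual way to express that: it passes the value through verbatim, bypassing query-time parsing entirely. A minimal sketch of the two query forms discussed in this thread (field name taken from the messages above):

```
# Parsed by the default Lucene query parser
q=MyCustomField:"Some Text"

# Term query parser: the value after the closing brace is used verbatim
q={!term f=MyCustomField}Some Text
```

If the first form fails while the second succeeds, the stored value and the query differ in some invisible character, which is exactly what this thread eventually uncovered.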
ed to configure). But I think it doesn't solve the need, because it
defines a parent/child relationship.
Do you have some clue to investigate? Thank you for your help!
*Angel** Adrián Addati*
2018-06-26 11:31 GMT-03:00 Erick Erickson :
> bq. I don't know if the best approach is combine
d metadata.
Regards...
* - - -*
*Angel** Adrián Addati*
2018-06-26 10:50 GMT-03:00 Erick Erickson :
> From your problem description, it looks like you want to gather the
> data from the DB and filesystem and combine them into a Solr document
> at index time, then index that document.
>
Hello,
Wondering whether the Luke handler's response for the lastModified field
refers to a hard commit only, or any that has happened last, including a
soft commit?
Thank you
Angel
you need one extra...
>
> On Thu, Aug 24, 2017 at 6:59 AM, Angel Todorov
> wrote:
>
> > I also tested, of course, by setting a value of 0, expecting that it
> would
> > work in the way I expect it to , but unfortunately - it doesn't. Nothing
> is
> > committed in tha
I also tested, of course, by setting a value of 0, expecting that it would
work the way I expect it to, but unfortunately it doesn't. Nothing is
committed in that case.
Thanks
On Thu, Aug 24, 2017 at 1:54 PM, Angel Todorov wrote:
> Hi all,
>
> I have thi
s is 1, it behaves as if it is 2. If it is set to 2, I need to do 3
updates, and only after the third one are my changes visible for searching.
Is this a bug?
Thanks,
Angel
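If the counter under discussion is autoSoftCommit's maxDocs (an assumption; the thread does not show the actual config), the relevant solrconfig.xml fragment looks like this, and the off-by-one behaviour described above would be worth checking against it:

```xml
<!-- Hypothetical solrconfig.xml fragment; the values are illustrative,
     not taken from the thread. -->
<autoSoftCommit>
  <maxDocs>1</maxDocs>   <!-- soft commit after each added document -->
  <maxTime>-1</maxTime>  <!-- disable time-based soft commits -->
</autoSoftCommit>
```

Note that a soft commit only opens a new searcher; durability still requires a hard commit via autoCommit or an explicit commit call.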
brand new suggester
implementation. Since most people use Google as the "example", it
should work the way it works there.
Thanks again,
Angel
On Tue, Jul 25, 2017 at 12:00 PM, alessandro.benedetti wrote:
> I think this bit is the problem :
>
> "I am using a Shi
after the StandardTokenizer, not sure if
that has anything to do with it.
Thanks,
Angel
On Tue, Jul 25, 2017 at 12:09 AM Rick Leir wrote:
> Angel,
> The 0x20 byte is the ASCII space character, which is a separator in most
> contexts. Breaking the buffer at spaces, you can see 6 non-space tokens
question is about the error in general - why is it occurring? I only
have English text, nothing special.
Thanks,
Angel
types
of results with a single query.
Thanks
Angel
ude "video", for example:
http://localhost:8080/solr//suggest?suggest=true&suggest.dictionary=mySuggester&wt=json&suggest.q=*"video
g"*
{"responseHeader":{"status":0,"QTime":48},"suggest":{"mySuggester":{"\"v
quot; or "br".
Do you see anything wrong with my setup ? Thank you
Angel
On Mon, Jun 26, 2017 at 6:04 PM, govind nitk wrote:
> Hi Alessandro,
>
> Thanks for clarification.
>
>
>
> On Mon, Jun 26, 2017 at 4:53 PM, alessandro.benedetti <
> a.benede...@sease.io>
lts, it's just that the results
are not what I'd expect.
I would greatly appreciate it if you could guide me to the right config.
Thanks,
Angel
?
Thanks
Angel
uery that's executed.
Angel
On Fri, May 22, 2015 at 1:03 AM, Erick Erickson
wrote:
> bq: Which is logical as index growth and time needed to put something
> to it is log(n)
>
> Not really. Solr indexes to segments, each segment is a fully
> consistent "mini index".
>
ping below 300.
I should probably experiment with sharding those documents to multiple SOLR
cores - that should help, I guess. I am talking about something like this:
https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
Thanks,
Angel
On Thu, May 21, 2015 at 11:36
I've tried changing mergeFactor,
autowarmCounts, and the buffer sizes - to no avail.
I am using SOLR 5.1
Thanks !
Angel
and _ suffixes to all fields in my schema,
for the above to work? I mean, do I need to have title, title_en, title_jp,
and so on manually defined in the schema? I still don't understand why a
document isn't added at all, without any error being thrown.
Thank you,
Angel
Hello Sujatha,
have you tried to leave the quotes out? :-)
Alternatively try using 'id:1.0' to see
if the same error arises.
A bit more information on the update issue (the exact query sent and all the
corresponding log entries) would be needed to help you with your problem.
Cheers
on of
DocValue-Fields could be the solution needed in this case.
Thanks again to both of you and Toke for the feedback!
Cheers
Angel
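For reference, enabling docValues on the facet fields moves facet data off the Java heap into memory-mapped column storage, which is what relieves fieldcache pressure in a setup like the one described above. A schema.xml sketch, with field and type names invented for illustration:

```xml
<!-- Hypothetical schema.xml fragment: a facet field backed by docValues -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true" multiValued="true"/>
```

The field must be reindexed after adding docValues="true"; existing segments do not pick up the change.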
On 05.03.2014 17:06, Shawn Heisey wrote:
On 3/5/2014 4:40 AM, Angel Tchorbadjiiski wrote:
Hi Shawn,
On 05.03.2014 10:05, Angel Tchorbadjiiski wrote:
Hi Shawn,
I
Hi Shawn,
On 05.03.2014 10:05, Angel Tchorbadjiiski wrote:
Hi Shawn,
It may be your facets that are killing you here. As Toke mentioned, you
have not indicated what your max heap is. 20 separate facet fields with
millions of documents will use a lot of fieldcache memory if you use the
On 05.03.2014 11:51, Toke Eskildsen wrote:
On Wed, 2014-03-05 at 09:59 +0100, Angel Tchorbadjiiski wrote:
On 04.03.2014 11:20, Toke Eskildsen wrote:
Angel Tchorbadjiiski [angel.tchorbadjii...@antibodies-online.com] wrote:
[Single shard / 2 cores Solr 4.6.1, 65M docs / 50GB, 20 facet fields
else
helps.
Thanks a lot,
Angel
Hi Toke,
thank you for the mail.
On 04.03.2014 11:20, Toke Eskildsen wrote:
Angel Tchorbadjiiski [angel.tchorbadjii...@antibodies-online.com] wrote:
[Single shard / 2 cores Solr 4.6.1, 65M docs / 50GB, 20 facet fields]
The OS in use is a 64bit linux with an OpenJDK 1.7 Java with 48G RAM
the jetty container:
-XX:+UseCompressedOops
-XX:+UseCompressedStrings
-XX:+OptimizeStringConcat
-XX:+UseStringCache
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=75
-XX:MaxTenuringThreshold=1
-XX:SurvivorRatio=8
-XX:+CMSParallelRemarkEnabled
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
eter high enough (35000km?) would render the behaviour
you want to have, as this would include any point on earth :-)
Cheers
Angel
Hi, below is my Java program for indexing around 30 million records from
CSV. But this doesn't work for such a large file. It works perfectly for
smaller files. What's wrong with my code? Please let me know.
try{
/*SolrServer server = new
CommonsHttpSolrServer("http://localhost
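The usual fix for an indexer that works on small files but dies on 30M rows is to stream the input and send documents in fixed-size batches rather than building everything in memory. The sketch below shows only the batching logic, which is self-contained and runnable; the SolrJ calls appear as comments because they need a running server, and the class and batch size are my own invention, not code from the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedIndexer {
    static final int BATCH_SIZE = 1000;

    // Splits a stream of rows into fixed-size batches. In the real
    // indexer, each batch would be converted to SolrInputDocuments and
    // sent with server.add(batch), keeping memory use bounded.
    static List<List<String>> toBatches(Iterable<String> rows) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>(BATCH_SIZE);
        for (String row : rows) {
            current.add(row);
            if (current.size() == BATCH_SIZE) {
                batches.add(current);
                current = new ArrayList<>(BATCH_SIZE);
            }
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) rows.add("row" + i);
        List<List<String>> batches = toBatches(rows);
        System.out.println(batches.size());        // 3 batches
        System.out.println(batches.get(2).size()); // last batch holds 500
        // Hypothetical SolrJ usage (assumption, era-appropriate API):
        // SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // for each batch: server.add(batchOfSolrInputDocuments);
        // server.commit(); // once at the end, or periodically
    }
}
```

Committing once at the end (or every N batches) instead of per document also matters at this scale.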
I had difficulties getting this to work, so hopefully this will help others
having the same issue.
My environment:
Solr 3.1
MySQL 5.0.77
Schema:
DIH data-config:
I kept getting build errors similar to this:
org.apache.solr.common.SolrException:
org.apache.lucene.spat
>
> On Jun 24, 2010, at 12:32 AM, Eric Angel wrote:
>
>> I'm using solr 4.0-2010-06-23_08-05-33 and can't figure out how to add the
>> spatial types (LatLon, Point, GeoHash or SpatialTile) using
>> dataimporthandler. My lat/lngs from the database are in separ
I'm using solr 4.0-2010-06-23_08-05-33 and can't figure out how to add the
spatial types (LatLon, Point, GeoHash or SpatialTile) using dataimporthandler.
My lat/lngs from the database are in separate fields. Does anyone know how to
do this?
Eric
are not analyzed, the 'h' character in the query
"hésita*" does NOT get removed during query time. This means that unless the
original token was preserved in the field it wouldn't find any matches.
Does this help?
Cheers
Avlesh
On Tue, Oct 6, 2009 at 2:02 PM, Angel Ice wr
query-time.
If you want to enable wildcard queries, preserving the original token (while
processing each token in your filter) might work.
Cheers
Avlesh
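One concrete way to preserve the original token alongside its folded form, assuming the accent removal is done with ASCIIFoldingFilterFactory (the thread does not name the filter), is its preserveOriginal flag, available in more recent Solr versions. Both "hésita" and "hesita" then end up in the index, so the wildcard query can match either:

```xml
<!-- Hypothetical index-time analyzer fragment -->
<analyzer type="index">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```

A reindex is required after changing the analyzer chain.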
On Mon, Oct 5, 2009 at 10:39 PM, Angel Ice wrote:
> Hi everyone,
>
> I have a little question regarding the search engine when a wildcar
Hi everyone,
I have a little question regarding the search engine when a wildcard character
is used in the query.
Let's take the following example :
- I have indexed the word Hésitation (with an accent on the "e")
- The filters applied to the field that will handle this word, result i
Use Tika to extract the content (you can use
AutoDetectParser)
and then:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("DOC_CONTENT", CONTENT);
solrServer.add(doc);
solrServer.commit();
On Wed, Sep 2, 2009 at 5:26 PM, Angel Ice wrote:
> Hi everybody.
>
> I hope it's the righ
a Linux utility like
curl, and the PDF/Word/RTF/PPT/XLS etc. will be indexed. We tested this last
week.
Tika has already been included in Solr 1.4.
Cheers
Rajan
On Wed, Sep 2, 2009 at 5:26 PM, Angel Ice wrote:
> Hi everybody.
>
> I hope it's the right place for questions, if not sor
Hi everybody.
I hope it's the right place for questions; if not, sorry.
I'm trying to index rich documents (PDF, MS docs, etc.) in Solr/Lucene.
I have seen a few examples explaining how to use Tika to solve this. But most
of these examples use curl to send documents to Solr or an HTML POST wi