Hi,
Yes, we need to upgrade, but my question is whether reindexing of all cores is
required, or whether we can directly use the already indexed data folders of
Solr 3.3 with Solr 3.4.
Thanks,
Isan Fulia.
On 19 September 2011 11:03, Wyhw Whon wrote:
> If you are already using Apache Lucene 3.1, 3.2 or 3.3,
I did some more testing and it seems that as soon as you use FileDataSource
it overrides any other dataSource.

  <entity url="http://www.server.com/rss.xml" processor="XPathEntityProcessor"
          forEach="/rss/channel/item" >

will not work, unless you remove FileDataSource. Anyone know a way to fix this?
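For context, DIH lets each dataSource carry a name that entities then reference
explicitly, although a later reply in this thread suggests this may only help
when the sources are of the same type. A hypothetical named setup (the "web" /
"disk" names and the field mapping are made up for illustration):

  <dataConfig>
    <dataSource name="web"  type="URLDataSource"  encoding="UTF-8"/>
    <dataSource name="disk" type="FileDataSource" encoding="UTF-8"/>
    <document>
      <!-- each entity picks its dataSource by name instead of relying on the default -->
      <entity name="rss" dataSource="web" processor="XPathEntityProcessor"
              url="http://www.server.com/rss.xml" forEach="/rss/channel/item">
        <field column="title" xpath="/rss/channel/item/title"/>
      </entity>
    </document>
  </dataConfig>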
Hi List,
I tried Solr 3.4.0 today and while indexing I got the error
java.lang.RuntimeException: [was class java.io.CharConversionException]
Invalid UTF-8 middle byte 0x73 (at char #66611, byte #65289)
My earlier version was Solr 1.4 and this same document went into the index
successfully. Looking ar
On Sep 18, 2011, at 19:43 , Michael Sokolov wrote:
> On 9/15/2011 8:30 PM, Scott Smith wrote:
>>
>> 2. Assuming that the answer to 1 is "correct", then is there an easy
>> way to take a lucene query (with nested Boolean queries, filter queries,
>> etc.) and generate a SOLR query string w
On Sep 18, 2011, at 21:52 , scorpking wrote:
> Hi Erik Hatcher-4
> I tried indexing from your URL, but I have a problem. In your case, you knew a
> file's absolute path (Dir.new("/Users/erikhatcher/apache-solr-3.3.0/docs")),
> so you could index it. In my case, I don't know a file's absolute path. I
>
Reindexing is not necessary. Drop in 3.4 and go.
For this sort of scenario, it's easy enough to try using a copy of your
directory with an instance of the newest release of Solr. If the
release notes don't say a reindex is necessary, then it's not, but it's always a
good idea to try it and run
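As a concrete way of trying that, one could point a stock 3.4 example instance
at a copy of the existing Solr home; the paths below are purely illustrative:

  # copy the existing Solr home (conf + data) somewhere safe to experiment on
  cp -r /opt/apache-solr-3.3.0/example/solr /tmp/solr-upgrade-test

  # start the 3.4 example Jetty against the copied home and watch the logs
  cd apache-solr-3.4.0/example
  java -Dsolr.solr.home=/tmp/solr-upgrade-test -jar start.jar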
Yeah, I want to use DIH and I tried to configure my data-config file, but it is
wrong. This is my config:
Just in case someone might be interested, here is the log:
SEVERE: java.lang.RuntimeException: [was class
java.io.CharConversionException] Invalid UTF-8 middle byte 0x73 (at char
#66641, byte #65289)
at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
at com.ctc.wstx.sr.
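If it helps to track down the offending source document, a quick standalone
check (nothing Solr-specific; the class name and usage here are made up) can
report where a file stops being valid UTF-8:

  import java.io.DataInputStream;
  import java.io.File;
  import java.io.FileInputStream;
  import java.nio.ByteBuffer;
  import java.nio.CharBuffer;
  import java.nio.charset.Charset;
  import java.nio.charset.CharsetDecoder;
  import java.nio.charset.CoderResult;
  import java.nio.charset.CodingErrorAction;

  // Decodes a whole file as UTF-8 and reports the byte offset of the first
  // malformed sequence, if any.  Usage: java Utf8Check /path/to/document.xml
  public class Utf8Check {
      public static void main(String[] args) throws Exception {
          File f = new File(args[0]);
          byte[] data = new byte[(int) f.length()];
          DataInputStream in = new DataInputStream(new FileInputStream(f));
          in.readFully(data);
          in.close();

          // REPORT makes the decoder stop at the first bad sequence instead of replacing it
          CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
                  .onMalformedInput(CodingErrorAction.REPORT)
                  .onUnmappableCharacter(CodingErrorAction.REPORT);
          ByteBuffer bytes = ByteBuffer.wrap(data);
          CharBuffer chars = CharBuffer.allocate(data.length);
          CoderResult result = decoder.decode(bytes, chars, true);
          if (result.isError()) {
              System.out.println("Invalid UTF-8 near byte #" + bytes.position());
          } else {
              System.out.println("File decodes cleanly as UTF-8");
          }
      }
  }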
Hi,
I was wondering if there is any method to get back the term vector list from
Solr through SolrNet?
From the source code for SolrNet I couldn't find any term vector parser.
--
-JAME
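Until such a parser exists, one possible workaround is to request the term
vectors over plain HTTP and parse the response yourself. This assumes the /tvrh
request handler and TermVectorComponent wired up in the stock example
solrconfig.xml, and an example document id:

  curl 'http://localhost:8983/solr/tvrh?q=id:SOLR1000&tv.all=true&wt=json'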
> I did some more testing and it seems
> that as soon as you use FileDataSource
> it overrides any other dataSource.
>
>
> <dataSource type="URLDataSource" encoding="UTF-8"
>             connectionTimeout="3" readTimeout="3"/>
> <dataSource type="FileDataSource" />
>
>
> <entity rootEntity="false"
>         url="http://www.server.com/rss.xml"
>         processor="XPathEntityProcessor
Thanks Erik.
On 19 September 2011 15:10, Erik Hatcher wrote:
> Reindexing is not necessary. Drop in 3.4 and go.
>
> For this sort of scenario, it's easy enough to try using a copy of your
> directory with an instance of the newest release of Solr. If
> the release notes don't say a reindex
Yeah, naming dataSources maybe only works when they are of the same type.
I got this to work for the local files with URLDataSource and
url="file:///${crawl.fileAbsolutePath}" (two forward slashes don't work).
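For anyone landing on this thread later, a minimal sketch of that workaround
(entity names, baseDir and the field mapping are illustrative, not taken from
the original config) could look like:

  <dataConfig>
    <!-- one URLDataSource serves both remote and local content -->
    <dataSource type="URLDataSource" encoding="UTF-8"/>
    <document>
      <entity name="crawl" rootEntity="false" dataSource="null"
              processor="FileListEntityProcessor"
              baseDir="/data/feeds" fileName=".*\.xml">
        <!-- local files are read through the same URLDataSource via file:/// URLs -->
        <entity name="feed" processor="XPathEntityProcessor"
                url="file:///${crawl.fileAbsolutePath}" forEach="/rss/channel/item">
          <field column="title" xpath="/rss/channel/item/title"/>
        </entity>
      </entity>
    </document>
  </dataConfig>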
Hi, I am currently using the DIH to connect to and import data from an MS SQL
Server, and in general doing full, delta or delete imports seems to work perfectly.
The issue is that I spotted some errors being logged in the Tomcat logs for
Solr, which are:
19-Sep-2011 07:45:25 org.apache.solr.common.SolrE
Please include information about your heap size (and other Java
command line arguments) as well as platform OS (version, swap size,
etc), Java version, and underlying hardware (RAM, etc) for us to better
help you.
From the information you have given, increasing your heap size should help.
Thanks,
Gl
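For reference, heap size and related settings are passed as JVM arguments when
starting Solr; a typical invocation for the example Jetty setup (all values and
paths here are illustrative, not a recommendation) looks like:

  java -Xms512m -Xmx2048m -XX:+UseConcMarkSweepGC \
       -Dsolr.solr.home=/path/to/solr/home -jar start.jar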
On 9/19/2011 5:27 AM, Erik Hatcher wrote:
> On Sep 18, 2011, at 19:43 , Michael Sokolov wrote:
>> On 9/15/2011 8:30 PM, Scott Smith wrote:
>>> 2. Assuming that the answer to 1 is "correct", then is there an easy way
>>> to take a lucene query (with nested Boolean queries, filter queries, etc.) and g
Hi all-
I'm not sure if I should break this out into two separate questions to the list
for searching purposes, or if one is more acceptable (don't want to flood).
I have two (hopefully) straightforward questions:
1. Is it possible to expose the unique ID of a document to a DIH query? The
reas
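If the goal is to feed a document's unique key back into a follow-up DIH query,
the delta-import mechanism already exposes it: the ids returned by deltaQuery
are available to deltaImportQuery as ${dataimporter.delta.<key-column>}. A
rough sketch (table and column names invented):

  <entity name="item" pk="id"
          query="SELECT id, title FROM items"
          deltaQuery="SELECT id FROM items WHERE updated &gt; '${dataimporter.last_index_time}'"
          deltaImportQuery="SELECT id, title FROM items WHERE id = '${dataimporter.delta.id}'"/>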
Hi all,
while thinking about a migration plan for a Solr 1.4.1 master / slave
architecture (1 master with N slaves already in production) to Solr 3.x, I
imagined going for a graceful migration, starting by migrating only
one/two slaves, making the needed tests on those while still offering the
inde
The javabin versions are not compatible, and neither is the index format. I don't
think it will even work.
Can you not reindex the master on a 3.x version?
On Monday 19 September 2011 18:17:45 Tommaso Teofili wrote:
> Hi all,
> while thinking about a migration plan of a Solr 1.4.1 master / slave
> ar
Thanks Robert,
Removing "set" from "setMaxMergedSegmentMB" and using "maxMergedSegmentMB"
fixed the problem.
(Sorry about the multiple posts. Our mail server was being flaky and the
client lied to me about whether the message had been sent.)
I'm still confused about the mergeFactor=10 settin
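For readers following along: the element names in the mergePolicy section of
solrconfig.xml are mapped onto TieredMergePolicy's setters by reflection, which
is why the "set" prefix has to be dropped. In a 3.x config that looks roughly
like this (the 5000 value is just an example):

  <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
    <double name="maxMergedSegmentMB">5000.0</double>
  </mergePolicy>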
Hi,
After a little struggle I figured out a way of joining xml files with a database,
but for some reason it is not working. After the import, only the content
from the xml is present in my index. The Msql contents are missing.
To debug, I replaced the parameterized query with a simple select statement
and it
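Without seeing the actual data-config it is hard to say more, but the usual
pattern for this kind of join is a database root entity with a nested
XPathEntityProcessor child; everything below (driver, table, column and path
names) is made up for illustration:

  <dataConfig>
    <dataSource name="db" type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost/mydb" user="user" password="pass"/>
    <dataSource name="files" type="FileDataSource" encoding="UTF-8"/>
    <document>
      <!-- root entity: one Solr document per database row -->
      <entity name="item" dataSource="db"
              query="SELECT id, title, xml_path FROM items">
        <!-- child entity: extra fields for the same document read from an XML file -->
        <entity name="detail" dataSource="files" processor="XPathEntityProcessor"
                url="${item.xml_path}" forEach="/doc">
          <field column="description" xpath="/doc/description"/>
        </entity>
      </entity>
    </document>
  </dataConfig>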
OK. Thanks for all of the suggestions.
Cheers
Scott
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Monday, September 19, 2011 3:27 AM
To: solr-user@lucene.apache.org
Subject: Re: Lucene->SOLR transition
On Sep 18, 2011, at 19:43 , Michael Sokolov wrote:
Hello,
I am running a simple test after reading:
http://wiki.apache.org/solr/UpdateJSON
I am only using one object from a large JSON file to test and see if
the indexing works:
curl 'http://localhost:8983/solr/update/json?commit=true'
--data-binary @productSample.json -H 'Content-type:application
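For reference, the content type for the JSON update handler is
application/json, so the complete command would look something like this (file
name as in the original post):

  curl 'http://localhost:8983/solr/update/json?commit=true' \
       --data-binary @productSample.json -H 'Content-type:application/json'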
Hello Everyone,
I'm quite curious about how the following data gets understood and indexed by
Solr:
[{
  "id": "Fubar",
  "url": null,
  "regularPrice": 3.99,
  "offers": [
    {
      "url": "",
      "text": "On Sale",
      "id": "OS"
    }
  ]
}]
1) The field "id" is present as part of the main ob
Ok, a little bit of deleting lines from the JSON file led me to realize
that Solr isn't happy with the following:
"offers": [
  {
    "url": "",
    "text": "On Sale",
    "id": "OS"
  }
],
But as to why? Or what to do to remedy this ... I have no clue :(
- Pulkit
On Mon, Sep 19, 201
So I'm not an expert in the Solr JSON update message; I've never used it
before myself. It's documented here:
http://wiki.apache.org/solr/UpdateJSON
But Solr is not a structured data store like MongoDB or something; you
can send it an update command in JSON as a convenience, but don't let
that mak
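In other words, each document sent to /update/json in Solr 3.x has to be a flat
map of field names to values (multi-valued fields are allowed as arrays of
scalars), so the nested "offers" objects would need to be flattened first. A
hypothetical flattened version, assuming offer_id and offer_text are defined as
multiValued fields in schema.xml:

  [{
    "id": "Fubar",
    "regularPrice": 3.99,
    "offer_id": ["OS"],
    "offer_text": ["On Sale"]
  }]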
Hi,
As we join two or more tables in SQL, can we join two or more indexes in Solr
as well? If yes, then in which version?
Regards,
Ahsan
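For readers of the archive: a join query parser was added in the Solr 4.0
development line; within a single index it looks roughly like this (the field
names are placeholders):

  q={!join from=parent_id to=id}name:ipod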