Thanks a lot, Frédéric.
On 9 November 2011 at 21:52, Frédéric Cons wrote:
> The CodecUtil.writeHeader signature has changed from
>
> public static DataOutput writeHeader(DataOutput out, String codec, int
> version)
>
> in Lucene 3.4 (which is the method that is not found) to
>
> public static void writ
Solr 1.4 is doing great with respect to Indexing on a dedicated physical server
(Windows Server 2008). For Indexing around 1 million full text documents
(around 4 GB size) it takes around 20 minutes with Heap Size = 512M - 1G & 4GB
RAM.
However while using Solr on a VM, with 4 GB RAM it took 50 minutes to index at
the first time.
Hello,
I have some problems with my application. I have some fields and use edismax
to search between them. Now I want to configure it so that one field must match.
Let's give an example:
firstname lastname Nicknames
Lionel messi loe,pulga
When I search I want only results that m
org.apache.solr.handler.dataimport.DataImportHandlerException: 'baseDir'
value: url_example is not a directory Processing Document # 1
at
org.apache.solr.handler.dataimport.FileListEntityProcessor.init(FileListEntityProcessor.java:123)
I added onError="skip" and onError="continue" to all e
--- On Thu, 11/10/11, roySolr wrote:
> From: roySolr
> Subject: One field must match with edismax
> To: solr-user@lucene.apache.org
> Date: Thursday, November 10, 2011, 11:40 AM
> Hello,
>
> I have some problems with my application. I have some
> fields and use edismax
> to search between the
Hello,
I'm designing a Solr web system and our system will have multiple
Solr instances on a Tomcat.
According to the Solr wiki, the instruction is to use a single war file and
multiple context files (solr[1-2].xml).
http://wiki.apache.org/solr/SolrTomcat#Multiple_Solr_Webapps
I wonder why the following stru
Hi Kurt,
I took your fieldtype definition and could not reproduce your problem with Solr
3.4.
But I think you have a problem with the ampersand in "A. J. Johnson & Co."
Two comments:
In your analysis HTML example there is a gap of two positions between Johnson
and Co. That should not happen ("A. J.
Hello everyone,
We see a large index size when norms are enabled.
schema.xml:
type declaration:
fields declaration:
For 5000 documents (every document has 2 unique fields, 2*5000=10000
unique fields in the index), index size is 48.24 MB.
But if we enable omitting norms
Using Solr 3.4.0. That changelog actually says it should reduce memory usage
for that version. We were on a much older version previously, 1.something.
Norms are off on all fields that it can be turned off on.
I'm just hoping this new version doesn't have any leaks. Does FastLRUCache vs
LRUCache
Hi,
I have a problem with facets on a grouped field.
Ex: Solr has 3 records, 2 records with the same RoundtripGroupCode and
CountryCode ("MA"), and 1 record with another RoundtripGroupCode and another
CountryCode ("ID").
Because I use grouping, I should only have 1 result in MA and 1 result
i
Steve,
do you have any custom code in your Solr?
We had out-of-memory errors just because of that: I was using a method to
obtain the request which was leaking... I had not read the javadoc carefully enough.
Since then, no leak.
What do you do after the OoME?
paul
On 9 Nov 2011, at 21:33, Steve F
Karsten,
We're using solr.py for indexing.
Thanks for your suggestion, though. I will look into the indexing process to
see how the ampersands are being handled.
-Kurt
From: karsten-s...@gmx.de [karsten-s...@gmx.de]
Sent: Thursday, November 10, 2011 6:06
Hello all
I have gone through the tutorials for SolrJ. Now I want to create multi-core
indexes through SolrJ, but I am not getting a clue, so can anybody post some
example code?
Regards
Dhaivat
You should create HttpSolrServer that works with a core. One
CommonsHttpSolrServer per core.
java snippet: final CommonsHttpSolrServer solrServer = new
CommonsHttpSolrServer("http://localhost:8080/solr"; + "/myCoreName");
The rest remains unchanged.
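For illustration, a minimal sketch of this per-core pattern (assuming SolrJ 3.x; the core names and the id field below are hypothetical):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class PerCoreServers {
    public static void main(String[] args) throws Exception {
        // One server object per core; add/commit/query calls stay the same.
        SolrServer core1 = new CommonsHttpSolrServer("http://localhost:8080/solr/core1");
        SolrServer core2 = new CommonsHttpSolrServer("http://localhost:8080/solr/core2");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        core1.add(doc);   // indexed only into core1
        core1.commit();

        core2.ping();     // core2 is addressed independently
    }
}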
Thanks, Ivan
On Thu, 2011-11-10 at 16:25 +020
Dear list,
I've experienced a weird (unexpected?) behaviour concerning core reload
on a master instance.
My setup :
master/slave on separate hosts.
On the master, I update the schema.xml file, adding a dynamic field of
type random sort field.
I reload the master using core admin.
The new f
Please forgive my lack of knowledge; I'm posting for the first time!
I'm using solrindex to index and it appears all is going OK in that I'm
receiving the following for each segment:
2011-10-30 20:18:06,870 INFO solr.SolrIndexer - SolrIndexer: starting
2011-10-30 20:18:06,993 INFO indexer.Indexe
Thanks Ivan,
Is there any specific method with which I can create a core and add documents
to it?
Regards
Dhaivat
To create a core please take a look at
org.apache.solr.client.solrj.request.CoreAdminRequest.
To index documents try:
SolrServer#add(Collection);
SolrServer#commit();
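A minimal sketch of those two steps together (assuming SolrJ 3.x; the core name, instance directory, and id field are hypothetical):

import java.util.ArrayList;
import java.util.Collection;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.SolrInputDocument;

public class CreateCoreAndIndex {
    public static void main(String[] args) throws Exception {
        // Core admin requests go to the root Solr URL, not to a core.
        SolrServer admin = new CommonsHttpSolrServer("http://localhost:8080/solr");
        CoreAdminRequest.createCore("myNewCore", "myNewCoreDir", admin);

        // Documents are then added through a server pointed at the new core.
        SolrServer core = new CommonsHttpSolrServer("http://localhost:8080/solr/myNewCore");
        Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        docs.add(doc);
        core.add(docs);
        core.commit();
    }
}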
On Thu, 2011-11-10 at 16:39 +0200, dhaivat wrote:
> Thanks Ivan,
>
> Is there any specific method using which i can create core
Hi,
Can you elaborate on exactly is the problem? Which response time are you
talking about - the time for the req.process() or the measured time before the
document is visible in search?
Which SolrJ SolrServer class are you using?
Please tell us more about your environment, documents, index siz
if (solr.data.dir system property is set) {
the index files will be there.
} else {
they are at ${solr.solr.home}/data directory
}
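A rough Java rendering of that check (a simplified sketch; note the location can also be overridden by a <dataDir> setting in solrconfig.xml):

public class SolrDataDirCheck {
    public static void main(String[] args) {
        // Mirrors the lookup described above.
        String dataDir = System.getProperty("solr.data.dir");
        if (dataDir == null) {
            // Fall back to ${solr.solr.home}/data
            dataDir = System.getProperty("solr.solr.home", ".") + "/data";
        }
        System.out.println("Index files should be under: " + dataDir);
    }
}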
I hope it helps.
On Thu, Nov 10, 2011 at 9:37 AM, John wrote:
> Please forgive my lack of knowledge; I'm posting for the first time!
>
> I'm using solrind
Hi all,
Sorry for the inconvenience caused to anyone, but I need a reply to the
following.
I want to work with Solr, so I downloaded it and started to
follow the instructions provided in the tutorial available at
"http://lucene.apache.org/solr/tutorial.html" to execute some examples
fir
Did you start the server (java -jar start.jar)? Was it successful? Have you
checked the logs?
On 10.11.2011 17:54, dsy99 wrote:
Hi all,
Sorry for the inconvenience caused to anyone, but I need a reply to the
following.
I want to work with Solr, so I downloaded it and started
Always best to give the version of Solr you are using out of the gate.
How large is your index?
It may be that sometimes background merges are triggered, and sometimes they
are not.
Also, some bugs around the autocommit code (commitWithin shares a lot of code
with autocommit) have recently bee
Whenever Solr 4 is released.
What's on trunk now (mostly the search side) is pretty stable and has been used
in production environments.
We are working on the indexing side now on a branch. It's not stable, but we
are working on it currently. https://issues.apache.org/jira/browse/SOLR-2358
- M
What version of Solr?
Try /solr/collection1/admin/index.jsp even if it's single core.
What does your solr.xml say?
- Mark Miller
lucidimagination.com
On Oct 26, 2011, at 9:41 AM, Fred Zimmerman wrote:
> It is not a multi-core setup. The solr.xml has null value for . ?
> HTTP ERROR 404
>
> P
Assuming that there aren't too many of these, you can use
facet.query=field:banana&facet.query=field:oranges etc,
repeated as many times as you need.
This gets pretty awkward if you have more than a dozen or so
facets, but you might be able to get some mileage out of
defining these as defaults in
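In SolrJ terms, the repeated facet.query pattern looks roughly like this (a sketch; the field name and values are from the example above, and SolrJ 3.x is assumed):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetQueryCounts {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8080/solr");
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0);                      // only the counts are needed
        q.setFacet(true);
        q.addFacetQuery("field:banana");   // one facet.query per term
        q.addFacetQuery("field:oranges");
        QueryResponse rsp = server.query(q);
        // Each facet.query comes back with its own count.
        System.out.println(rsp.getFacetQuery());
    }
}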
If I have 2 masters in a master-master setup, where one master is the
live master and the other master acts as backup in slave mode, and
then during failover the slave master accepts new documents, such that
indexes become out of sync, how can original live master index get
back into sync with the
Thanks for the reply. There are many keyword terms (<1000?) and I'm not sure if
Solr would choke on a query string that long. Perhaps Solr is not built to
handle this type of internal re-indexing.
Thank you.
This post helped me solve my problem which I had posted on StackOverflow.
See my answer here:
http://stackoverflow.com/questions/8070742/solr-multivalued-field-how-can-i-return-documents-where-all-values-in-the-fiel
What's the version of the source you are using?
Can you send a minimum full test class demonstrating the issue instead?
Makes it easier to give it a try.
- Mark Miller
lucidimagination.com
On Nov 7, 2011, at 5:35 AM, Ronak Patel wrote:
>
>
> Hi,
>
>
> I am trying to write JUnit Test Code
There's really nothing in the Solr architecture that automatically
reindexes anything; you have to feed docs to Solr.
You could write a custom search component that tacked this
data on to the response packet at whatever granularity you
required. It's not actually as hard as it sounds and you wouldn
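For a sense of scale, a bare-bones sketch of such a component against the Solr 3.x SearchComponent API (the class name and response key are hypothetical, and the counting logic is left as a placeholder):

import java.io.IOException;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

public class KeywordCountsComponent extends SearchComponent {
    @Override
    public void prepare(ResponseBuilder rb) throws IOException {
        // Nothing to set up before the query runs.
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // Tack extra data onto the response packet.
        NamedList<Object> counts = new NamedList<Object>();
        counts.add("banana", 0);   // placeholder; compute real counts here
        rb.rsp.add("keywordCounts", counts);
    }

    @Override
    public String getDescription() { return "adds keyword counts to responses"; }

    @Override
    public String getSource() { return null; }

    @Override
    public String getSourceId() { return null; }

    @Override
    public String getVersion() { return "1.0"; }
}

The component would then be registered in solrconfig.xml and added to a request handler's components list.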
Sounds strange. Did you do >>>java -jar start.jar<<< on the console?
Am 10.11.2011 18:19, schrieb dsy99:
Yes, I executed the server "start.jar" embedded in the example folder but am not
getting any message after that. I checked the logs also; they are empty.
On Thu, 10 Nov 2011 22:34:57 +0530 wrote
On Nov 7, 2011, at 12:06pm, Chris Hostetter wrote:
>
> : I see that https://issues.apache.org/jira/browse/SOLR-653 removed this
> : support from SolrJ, because it was deemed too dangerous for mere
> : mortals.
>
> I believe the concern was that the "novice level" API was very in your
> face
Try replacing "localhost" with your domain or IP address and make sure the
port is open. Use the ps command to see if java is running.
lol. "It's not actually as hard as it sounds". When I understand what you
said, then I may agree. :-)
Thanks again!
Do Solr's LRU caches pay attention to hitcount when deciding which
entries to age out and use for autowarming, or is it purely based on the
last time that entry was touched? Is it a reasonable idea to come up
with an algorithm that uses hitcount along with entry age, ideally with
a configurabl
On Nov 10, 2011, at 1:36 PM, Ken Krugler wrote:
>> : I see that https://issues.apache.org/jira/browse/SOLR-653 removed this
>> : support from SolrJ, because it was deemed too dangerous for mere
>> : mortals.
That seems silly. It should just be well documented. At worst marked expert.
Yuck.
I
I've searched high and low for this via Google and the Solr wiki page, and
cannot find the answer.
My problem:
---
I'm having trouble getting the elevate.xml working. Here's what it currently
looks like:
From what I understand, this should work. Accordin
On 11/10/2011 11:55 AM, Shawn Heisey wrote:
Do Solr's LRU caches pay attention to hitcount when deciding which
entries to age out and use for autowarming, or is it purely based on
the last time that entry was touched? Is it a reasonable idea to come
up with an algorithm that uses hitcount alon
From: Paul Libbrecht
>To: solr-user@lucene.apache.org
>Sent: Thursday, November 10, 2011 7:19 AM
>Subject: Re: Out of memory, not during import or updates of the index
>
>do you have any custom code in your Solr?
>We had out-of-memory errors just because of that, I was using one method to
>obta
From: "Husain, Yavar"
>To: "solr-user@lucene.apache.org"
>Sent: Thursday, November 10, 2011 3:43 AM
>Subject: Solr Indexing Time
>
>However while using Solr on a VM, with 4 GB RAM it took 50 minutes to index at
>the first time. Note that there is no Network delays and no RAM issues. Now
>when I
From: Andre Bois-Crettez
>To: "solr-user@lucene.apache.org"
>Sent: Thursday, November 10, 2011 7:02 AM
>Subject: Re: Out of memory, not during import or updates of the index
>
>You can add JVM parameters to better trace the heap usage with
>-XX:+PrintGCDetails -verbose:gc -Xloggc:/your/gc/logfil
What is the point of a unique indexed field?
If for all of your fields, there is only one possible document, you
don't need length normalization, scoring, or a search engine at all...
just use a HashMap?
On Thu, Nov 10, 2011 at 7:42 AM, Ivan Hrytsyuk
wrote:
> Hello everyone,
>
> We have large in
On Thu, Nov 10, 2011 at 7:42 AM, Ivan Hrytsyuk
wrote:
> For 5000 documents (every document has 2 unique fields, 2*5000=10000
> unique fields in the index), index size is 48.24 MB.
You might be able to turn this around and encode the "unique field"
information in a multi-valued field:
For example, in
How big is your index?
What kind of queries do you tend to see? Do you facet on a lot of fields? Sort
on a lot of fields?
Before you get the OOM and are running along nicely, how much RAM is used?
On Nov 9, 2011, at 3:33 PM, Steve Fatula wrote:
> We get at rare times out of memory errors durin
(11/11/11 4:15), Michael Herchel wrote:
I've searched high and low for this via Google and the Solr wiki page, and
cannot find the answer.
My problem:
---
I'm having trouble getting the elevate.xml working. Here's what it currently
looks like:
Fro
From: Mark Miller
>To: solr-user
>Sent: Thursday, November 10, 2011 3:00 PM
>Subject: Re: Out of memory, not during import or updates of the index
>
>How big is your index?
>
The total for the data dir is 651M.
>What kind of queries do you tend to see? Do you facet on a lot of fields? Sort
>o
Has anyone had any success/experience with building a HBase datasource
for DIH? Are there any solutions available on the web?
Thanks.
solr.data.dir is set, but the files aren't in that location. I've checked
the logs, and I don't see any errors. Obviously something is wrong, but I
can't find any indications as to what. Anyone have suggestions?
I failed to mention that the segments* files were indeed created; it is the
other files that are missing.
I am using Lily for atomic index updates (implemented very nicely:
transactional, plus MapReduce, plus auto-denormalizing)
http://www.lilyproject.org
It slows down "mean time" 7-10 times, but TPS stays the same
- Fuad
http://www.tokenizer.ca
Sent from my iPad
On 2011-11-10, at 9:59 PM, M
No solutions to the problem?
OK. I'll look for the changes in the source code and if I succeed I'll share
them here for feedback.
Thanks
On Tue, Nov 8, 2011 at 5:06 PM, Samarendra Pratap wrote:
> Hi Chris,
> Thanks for the insight.
>
> 1. "omitTermFreqAndPositions" is very straightforward but if I
I just entered a bug: https://issues.apache.org/jira/browse/SOLR-2891
Thanks & regards, Edwin
On Nov 7, 2011, at 8:47 PM, Chris Hostetter wrote:
>
> : finally I want to use Solr highlighting. But there seems to be a problem
> : if I combine the char filter and the compound word filter in combi