Dear Solr users,
Every Friday I need to delete some documents from my Solr index (around
100-200 docs).
Could you help me choose the best way to delete these documents?
- I have the unique ID of each document.
Another question:
How can I disable the possibility to do:
http://localhost:8983/solr
Hi,
I'm trying to configure Crawl-Anywhere version 3.0.3 on my local system.
I'm following the steps from the page
http://www.crawl-anywhere.com/installation-v300/
but crawlerws is failing and throwing the below error message in the
browser:
http://localhost:8080/crawlerws/
1
I have a large Arabic text file that contains tweets, one tweet per line,
and I want to index it in Solr such that each line of the file is indexed
as a separate Solr document.
What I have tried so far:
- I know how to index SQL database records in Solr.
- I know how to change the Solr schema to f
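One way to approach this (a minimal sketch, assuming a Solr 4.x core whose schema has `id` and `text` fields and the default JSON update handler URL; both the field names and the URL are assumptions here, not something from the thread) is to read the file line by line and post each non-empty line as its own document:

```python
import json
import urllib.request

def lines_to_docs(path, encoding="utf-8"):
    """Turn each non-empty line of the file into one Solr document."""
    docs = []
    with open(path, encoding=encoding) as f:
        for i, line in enumerate(f):
            tweet = line.strip()
            if tweet:
                # "id" and "text" are assumed field names from the schema
                docs.append({"id": "tweet-%d" % i, "text": tweet})
    return docs

def index_docs(docs, solr_url="http://localhost:8983/solr/update/json"):
    """POST the docs to Solr's JSON update handler and commit (URL assumed)."""
    body = json.dumps(docs).encode("utf-8")
    req = urllib.request.Request(
        solr_url + "?commit=true",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# usage sketch:
# index_docs(lines_to_docs("tweets.txt"))
```

Reading with an explicit UTF-8 encoding keeps the Arabic text intact end to end.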
Hi,
The best approach is to find a query matching all the docs you want to
remove. If that is not simple, you can use the syntax id:(1 2 3 4 5) to
remove a group of docs by ID (assuming your default query operator is OR).
Regards.
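For example (a minimal sketch, assuming the XML update handler at its default URL; the exact core path is an assumption), one `<delete>` message can list several `<id>` elements, followed by a commit:

```python
import urllib.request

def build_delete_xml(ids):
    """Build one Solr <delete> message listing every document ID to remove."""
    return "<delete>" + "".join(
        "<id>%s</id>" % doc_id for doc_id in ids) + "</delete>"

def post_update(body, url="http://localhost:8983/solr/update"):
    """POST an XML update message to Solr (URL is an assumption)."""
    req = urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    return urllib.request.urlopen(req)

# usage sketch: delete a handful of docs by ID, then commit
# post_update(build_delete_xml(["1", "2", "3", "4", "5"]))
# post_update("<commit/>")
```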
On 27 January 2013 11:47, Bruno Mannina wrote:
> Dear Solr users
Hi,
Even if I have one or two thousand IDs?
Thanks
On 27/01/2013 13:15, Marcin Rzewucki wrote:
Hi,
The best approach is to find a query matching all the docs you want to
remove. If that is not simple, you can use the syntax id:(1 2 3 4 5) to
remove a group of docs by ID (assuming your default query operator is OR).
You can write a script and remove, say, 50 docs in one call. That is always
better than removing them one by one.
Regards.
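That batching idea can be sketched as follows (the update URL and the batch size of 50 are assumptions taken from the suggestion above):

```python
import urllib.request

def batched(ids, size=50):
    """Yield successive batches of at most `size` IDs."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def delete_in_batches(ids, url="http://localhost:8983/solr/update", size=50):
    """Send one <delete> request per batch of IDs, then a single commit."""
    for batch in batched(ids, size):
        body = "<delete>" + "".join(
            "<id>%s</id>" % i for i in batch) + "</delete>"
        req = urllib.request.Request(
            url,
            data=body.encode("utf-8"),
            headers={"Content-Type": "text/xml"},
        )
        urllib.request.urlopen(req)
    # one commit at the end instead of one per request
    commit = urllib.request.Request(
        url, data=b"<commit/>", headers={"Content-Type": "text/xml"})
    urllib.request.urlopen(commit)
```

With two thousand IDs this is 40 delete calls plus one commit, rather than two thousand separate calls.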
On 27 January 2013 13:17, Bruno Mannina wrote:
> Hi,
>
> Even if I have one or two thousand IDs?
>
> Thanks
>
> On 27/01/2013 13:15, Marcin Rzewucki wrote:
>
>> Hi,
>>
>> The best is if you could find a query for all docs you want to remove.
Yep, ok, thanks!
On 27/01/2013 13:27, Marcin Rzewucki wrote:
You can write a script and remove, say, 50 docs in one call. That is always
better than removing them one by one.
Regards.
On 27 January 2013 13:17, Bruno Mannina wrote:
Hi,
Even if I have one or two thousand IDs?
Thanks
On 27/01/2013 13:15, Marcin Rzewucki wrote:
This is actually showing that it works.
crawlerws is used by the Crawl Anywhere UI, which will pass it the correct
arguments when needed.
SivaKarthik wrote
> Hi,
> I'm trying to configure crawl-anywhere 3.0.3 version in my local system..
> i'm following the steps from the page
> http://www.crawl-anywher
Hi Mark,
I see no such issues in Solr 4.1. It seems to work fine.
Thanks.
On 24 January 2013 03:58, Mark Miller wrote:
> Yeah, I don't know what you are seeing offhand. You might try Solr 4.1 and
> see if it's something that has been resolved.
>
> - Mark
>
> On Jan 23, 2013, at 3:14 PM, Marcin Rzewucki wrote:
Hi:
I am very excited to announce the availability of Apache Solr 3.6.2 with
RankingAlgorithm30 1.4.3 with realtime-search support. realtime-search
is a very fast NRT implementation and lets you not only look up a document
by id but also search in real time; see
http://tgels.org/realtime-nrt.
Thanks Marcin. I found your post via a Google search and there was no reply
attached to it, so I thought no one had replied. Apologies, and thanks again.
On Sat, Jan 26, 2013 at 6:48 PM, Marcin Rzewucki wrote:
> Hi,
>
> Actually Mark Miller replied to this issue and it seems to be fixed in Solr
> 4.1
Before Solr 4.0, I secured Solr by enabling password protection in Jetty.
However, password protection makes SolrCloud stop working.
We use EC2 now, and we need the Solr admin web interface to be
accessible (with a password) from anywhere.
How do you protect your Solr server from unauthorized access
You can define a security filter in WEB-INF/web.xml on specific URL
patterns.
You might want to set the URL pattern to "/admin/*".
[Find examples here:
http://stackoverflow.com/questions/7920092/how-can-i-bypass-security-filter-in-web-xml
]
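For illustration, here is a minimal sketch of doing this with the servlet spec's declarative security in Solr's web.xml rather than a custom filter class. The role and realm names below are made up, and the realm itself (the user/password store) still has to be configured in your container, e.g. in Jetty:

```xml
<!-- Sketch only: role and realm names are illustrative. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Solr admin</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>solr-admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>SolrRealm</realm-name>
</login-config>
```

Any request matching /admin/* then requires BASIC auth for the solr-admin role, while the search and update handlers stay open.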
On Sun, Jan 27, 2013 at 8:07 PM, Mingfeng Yang wrote:
Hi Bruno,
Why don't you write a deletedPkQuery to delete these documents, and set your
cron to run a delta import every Friday?
Regards,
Harshvardhan Ojha
-Original Message-
From: Bruno Mannina [mailto:bmann...@free.fr]
Sent: Sunday, January 27, 2013 6:03 PM
To: solr-user@lucene.apache.org
S
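If the docs are loaded with the DataImportHandler, the relevant attribute is `deletedPkQuery` on the entity. A sketch of the data-config.xml (table and column names are illustrative), with a Friday cron job requesting `/dataimport?command=delta-import`:

```xml
<!-- data-config.xml sketch; table/column names are illustrative -->
<entity name="doc" pk="id"
        query="SELECT id, title FROM docs"
        deltaQuery="SELECT id FROM docs
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="SELECT id FROM docs WHERE deleted = 1">
</entity>
```

On each delta import, the IDs returned by deletedPkQuery are removed from the index while deltaQuery picks up changed rows.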
Hi Shawn,
Thanks for your reply. After following your suggestions we were able to
index 30k documents. I have some questions:
1) What is stored in the RAM while only indexing is going on? How to
calculate the RAM/heap requirements for our documents?
2) The document cache, filter cache, etc...are p
On 1/27/2013 10:28 PM, Rahul Bishnoi wrote:
Thanks for your reply. After following your suggestions we were able to
index 30k documents. I have some questions:
1) What is stored in the RAM while only indexing is going on? How to
calculate the RAM/heap requirements for our documents?
2) The docume
Hi guys,
What is the relation between the number of indexed fields and search speed?
For example, I have the same number of records and the same Solr search
query, but 100 indexed fields per record in case 1 and 1,000 in case 2. It's
obvious that the search time in case 2 will be greater, but by how much?