Erik's version might be better with tabs, though, to avoid CSV's
requirements around escaping commas, quotes, etc. And maybe trim those
fields a bit, either in awk or in an URP (UpdateRequestProcessor) inside Solr.
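
Something like this, for example - just a rough, untested sketch; the
offsets and field names (id, ref, name) are made up, so adjust them to
the actual record layout:

$ awk -v OFS='\t' '{
    id   = substr($0,  1, 11)   # fixed-width slices; offsets are illustrative
    ref  = substr($0, 15, 11)
    name = substr($0, 70, 20)
    # trim leading/trailing blanks from each field
    gsub(/^ +| +$/, "", id); gsub(/^ +| +$/, "", ref); gsub(/^ +| +$/, "", name)
    print id, ref, name
  }' records.txt > data.tsv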

But it would definitely work.
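
For a real batch, as Erik suggests below, I'd write the awk output to a
file like that and then post the file. Untested, but separator and trim
are plain CSV-handler parameters (%09 is a tab), so the trimming could
also be left to Solr instead of awk:

$ bin/post -c fw -params "fieldnames=id,ref,name&header=false&separator=%09&trim=true" -type text/csv data.tsv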

Regards,
   Alex.
----
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 28 August 2015 at 12:39, Erik Hatcher <erik.hatc...@gmail.com> wrote:
> How about this incantation:
>
> $ bin/solr create -c fw
> $ echo "Q36" | awk -v OFS=, '{ print substr($0, 1, 1), substr($0, 2, 2) }' | 
> bin/post -c fw -params "fieldnames=id,val&header=false" -type text/csv -d
> $ curl 'http://localhost:8983/solr/fw/select?q=*:*&wt=csv'
> val,_version_,id
> 36,1510767115252006912,Q
>
> With a big bunch of data, the stdin detection of bin/post doesn’t work well,
> so I’d certainly recommend going to an intermediate real file (awk... >
> data.csv ; bin/post … data.csv) instead.
>
>
> —
> Erik Hatcher, Senior Solutions Architect
> http://www.lucidworks.com
>
>
>
>
>> On Aug 28, 2015, at 3:19 AM, timmsn <tim.hammac...@web.de> wrote:
>>
>> Hello,
>>
>> I use Solr 5.2.1 and the bin/post tool. I am trying to index some files
>> that have fixed-length records and no whitespace to separate the fields.
>> How can I program a template or something like that for my fields?
>> Or can I edit the schema.xml to solve my problem?
>>
>> This is one record from one file; each file contains 40-100 records.
>>
>> AB134364312   58553521789       245678923521234130311G11222345610711MUELLER,
>> MAX -00014680Q1-24579021-204052667980002 EEUR          0223/123835062
>> 130445
>>
>>
>> Thanks!
>>
>> Tim
>>
>>
>>
>
