Hmmm... bit over my head, I'm afraid. I've tried a different tack that maybe you can help me with. It's quite possible that I will be forced to take the data in MS Excel format (like the attachment), so using the example that comes with the Spreadsheet::ParseExcel module I can parse through the worksheet and load each cell reference and value into a hash, as you suggested:
#!/usr/bin/perl -w
use strict;
use Spreadsheet::ParseExcel;

my $oExcel = Spreadsheet::ParseExcel->new;
my %hash;

die "You must provide a filename to $0 to be parsed as an Excel file\n"
    unless @ARGV;

my $oBook = $oExcel->Parse($ARGV[0]);
my ($oWkS, $oWkC, $cell, $ref, $value);

print "FILE  :", $oBook->{File},       "\n";
print "COUNT :", $oBook->{SheetCount}, "\n";
print "AUTHOR:", $oBook->{Author},     "\n" if defined $oBook->{Author};

for (my $iSheet = 0; $iSheet < $oBook->{SheetCount}; $iSheet++) {
    $oWkS = $oBook->{Worksheet}[$iSheet];
    print "--------- SHEET:", $oWkS->{Name}, "\n";
    for (my $iR = $oWkS->{MinRow};
         defined $oWkS->{MaxRow} && $iR <= $oWkS->{MaxRow}; $iR++) {
        for (my $iC = $oWkS->{MinCol};
             defined $oWkS->{MaxCol} && $iC <= $oWkS->{MaxCol}; $iC++) {
            $oWkC = $oWkS->{Cells}[$iR][$iC];
            next unless defined $oWkC;   # empty cell: calling ->Value would die
            $value = $oWkC->Value;
            $cell  = "$iR,$iC";
            $hash{$cell} = $value;
        }
    }
}

foreach $ref ("2,1") {
    if (exists $hash{$ref}) {
        print "\n\n\n----------------\n$ref spent $hash{$ref}\n----------------\n\n";
    }
}
but I'm still left with the problem of isolating the column and row headers so
that I can load the data into the DB correctly, i.e. update a record with the
value from cell blahblah where cost_centre (column 0 values) = blahblah and
cost_item (row 0 values) = blahblah
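One way to go at that, since row 0 holds the cost_item names and column 0 holds the cost_centre values: walk the %hash built above and turn it into a hash-of-hashes keyed by the two headers. This is just a sketch (the sample %hash here is hard-coded to stand in for the parsed spreadsheet, and the table/column names in the commented DBI part are invented -- adjust to your schema):

```perl
#!/usr/bin/perl -w
use strict;

# Stand-in for the %hash filled by the ParseExcel loop: "row,col" => value
my %hash = (
    "0,1" => "stationery", "0,2" => "postage",
    "1,0" => "1001", "1,1" => "10.56", "1,2" => "8",
    "2,0" => "1002", "2,1" => "40.50", "2,2" => "12.35",
);

# Pull out the headers: row 0 gives cost_item names,
# column 0 gives cost_centre values.
my (%cost_item, %cost_centre, %data);
for my $ref (keys %hash) {
    my ($row, $col) = split /,/, $ref;
    if    ($row == 0 && $col > 0) { $cost_item{$col}   = $hash{$ref} }
    elsif ($col == 0 && $row > 0) { $cost_centre{$row} = $hash{$ref} }
}

# Index every data cell by its two headers.
for my $ref (keys %hash) {
    my ($row, $col) = split /,/, $ref;
    next unless $row > 0 && $col > 0;   # skip the header row/column
    $data{ $cost_centre{$row} }{ $cost_item{$col} } = $hash{$ref};
}

print "$data{1001}{postage}\n";   # prints 8

# From there the DB load is one prepared statement, e.g. (names invented):
# my $sth = $dbh->prepare(
#     "UPDATE costs SET amount = ? WHERE cost_centre = ? AND cost_item = ?");
# for my $cc (keys %data) {
#     $sth->execute($data{$cc}{$_}, $cc, $_) for keys %{ $data{$cc} };
# }
```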
Mark
----- Original Message -----
From: "James Edward Gray II" <[EMAIL PROTECTED]>
To: "Mark Martin" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, February 04, 2004 2:34 PM
Subject: Re: Uploading a CSV file to a Database Table
> On Feb 4, 2004, at 6:50 AM, Mark Martin wrote:
>
> > I'm pulling my hair out trying to find out how to upload this data. My
> > files can vary in numbers of rows and columns but the x and y axis
> > always contain the same type of metadata - in my case cost centre and
> > cost item. A sample of the data would look like :
> >
> > cost_centre,stationery,postage,furniture,training,advertising
> > 1001,£10.56,£8,£500.99,£1500,£300.99
> > 1002,£40.50,£12.35,£0,£0,£450
>
> Generally, when I have data like this, my favorite thing to do is to
> build a hash out of each row, then use whatever I want by name. Like
> this:
>
> my $header = <>;
> chomp $header;                 # drop the newline or the last column name keeps it
> my @cols = split /,/, $header; # store column names for later use
>
> while (<>) {
>     chomp;
>     my @fields = split /,/;
>
>     # next we load our hash
>     my %record = map { ($cols[$_], $fields[$_]) } 0..$#fields;
>
>     # and here we can use it
>     print "$record{cost_centre} $record{stationery}\n"; # or whatever
> }
>
> From there your problem is simply building an SQL statement and
> feeding it to the DBI. Is that enough to get you going?
>
> James
>
costings.xls
Description: MS-Excel spreadsheet
