https://bugs.kde.org/show_bug.cgi?id=463708
Jack <ostrof...@users.sourceforge.net> changed:

           What    |Removed        |Added
----------------------------------------------------------------------------
             Status|NEEDSINFO      |REPORTED
         Resolution|WAITINGFORINFO |---

--- Comment #8 from Jack <ostrof...@users.sourceforge.net> ---
Importing 100 rows took under 3 seconds. I only let the whole file run about 7 minutes before killing it, so it may well have taken near or over 10 minutes. My best guess is that the time to process each row increases linearly with the number of payees: the import creates a new payee for each row (the text may be the same, but the digits at the end are different), and for each row it needs to compare against all the existing payees to see whether the payee already exists.

I would personally call your sample CSV a pathological example (meaning it triggers a possible problem behavior in the worst possible way). To be sure, I would time the import of 100, 200, 300, ... rows and see how the time scales with the number of rows.

My personal data file, which has data for well over 10 years, has 950 accounts (including 723 stock accounts, many closed) but only 335 payees. Note that I don't keep a separate payee for absolutely every possible one - I have some like "Miscellaneous gas station" and "Miscellaneous grocery store", with individual payees only for ones I use frequently.

So - I see how that file can take an excessively long time to import, but I think it is because the file is not really a good example of real data. If you really do expect to import thousands of transactions at a time, with very little chance of recognizing payees as already known, then you will end up with slow imports, and I can't think of any way to make the program faster at what it has to do. Note it might well be notably faster if you used shorter payee names, as your samples are all over 200 characters.

-- 
You are receiving this mail because:
You are the assignee for the bug.
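The linear-scan hypothesis above can be sketched with a small simulation. This is an assumption about the importer's behavior, not KMyMoney's actual code: if every row carries a unique payee name and each row is checked against all payees created so far, the total number of comparisons grows quadratically with the row count.

```python
# Simulation of the hypothesized import behavior (an assumption for
# illustration, not KMyMoney's real implementation): each row's payee
# is compared against every existing payee, and because every name is
# unique (same text, different trailing digits), each row adds one
# new payee to the list being scanned.

def count_comparisons(n_rows):
    """Count payee comparisons when importing n_rows all-unique payees."""
    payees = []        # existing payees, scanned linearly per row
    comparisons = 0
    for i in range(n_rows):
        name = f"Some very long payee description #{i}"
        for existing in payees:
            comparisons += 1
            if existing == name:
                break              # matched an existing payee
        else:
            payees.append(name)    # no match: a new payee is created
    return comparisons

for n in (100, 200, 300):
    print(n, count_comparisons(n))
# Total comparisons follow n*(n-1)/2: doubling the rows roughly
# quadruples the work, which matches "100 rows fast, full file slow".
```

Timing real imports of 100, 200, 300 rows as suggested above should show the same super-linear curve if this guess is right.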