https://bugs.kde.org/show_bug.cgi?id=291835
Harald Sitter <sit...@kde.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sit...@kde.org

--- Comment #57 from Harald Sitter <sit...@kde.org> ---
I did some research.... Let's start with the important thing: KIO does not
dictate the request size. KIO requests a read from smbc of up to a given
amount. smbc internally will break that amount into concurrent network
requests (libsmb_context.c, clireadwrite.c). The actual number of requests is
calculated based on server capabilities. In other words: smbc may not have an
async API, but that doesn't stop it from fulfilling a single client read
request with numerous network requests to the server.

Of note here is that the server's capabilities will severely impact the
concurrency and thus the throughput. E.g. if you use a server that only
speaks SMB1 and/or doesn't have the necessary capabilities you'll generally
see much worse performance, and there is nothing to be done about that.

Looking at the SMB2+ scenarios exclusively, it does however mean that the
larger the request size KIO uses, the higher the throughput. If you request a
1G read of a 1G file you may well get it back in a single read call at
near-ideal performance. And while that would seem attractive, it isn't: we
need progress reporting, and the larger the request size -> the fewer reads
-> the fewer progress updates we can give. I.e. the transfer dialog would be
broken. As a result we'd probably want a request size of no less than
filesize/100, or maybe filesize/50, so as to update every percent or two.

Indeed, when increasing the buffer size to filesize/50 you'll probably see
fairly good performance. In my tests against a Windows 10 server and a 1G
file that looks as follows:

Win -> Win:                      100-110 MiB/s
Win -> mount:                    ~108 MiB/s (~9.59s)
Win -> KIO-current:              ~58 MiB/s  (~18s)
Win -> KIO-dynamic-request-size: 70-85 MiB/s (~12.54s)

So far that doesn't look bad, but now the sync API gets in the way. Each read
is effectively a blocking chain of

- read()
- write()
- emit progress()

meaning write() directly impacts throughput, because the next batch of read
requests cannot be sent to the server until the read loop wraps around. IOW:
we do not "queue" the next read request with the server until after we've
written the current one. A quick concurrency hack that mitigates this with a
threaded r/w queue suggests the impact is actually considerable:

Win -> KIO-dynamic-request-size+threaded-write: ~106 MiB/s (~9.74s)

That seems about as efficient as this can get, considering we need to drag
the data through user space. So we'll probably want a smarter size
calculation (plus a cap at some reasonable value, because this will impact
RAM usage) and a circular buffer between read() and write() so we can
"buffer" data.
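To make the sizing idea concrete, here is a minimal sketch of what such a
dynamic request-size calculation could look like. The floor/cap values and
the requestSizeFor() name are made up for illustration only; the real numbers
would need tuning and this is not what any existing patch does.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Sketch: aim for ~50 reads per file (a progress update roughly every 2%)
// while capping the per-read buffer so RAM usage stays bounded.
// The floor and cap values below are assumptions for illustration.
constexpr std::uint64_t MinRequestSize = 64ULL * 1024;        // 64 KiB floor
constexpr std::uint64_t MaxRequestSize = 16ULL * 1024 * 1024; // 16 MiB cap

std::uint64_t requestSizeFor(std::uint64_t fileSize)
{
    return std::clamp(fileSize / 50, MinRequestSize, MaxRequestSize);
}

int main()
{
    // A 1 GiB file gets capped at 16 MiB per read, i.e. 64 reads/updates.
    std::printf("%llu\n",
                static_cast<unsigned long long>(requestSizeFor(1ULL << 30)));
    return 0;
}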
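And a rough sketch of the threaded read/write decoupling with a bounded
buffer between the two, so the next read can already be in flight while the
previous chunk is still being written locally. readChunk(), writeChunk() and
emitProgress() are placeholders standing in roughly for the smbc read, the
local write and the progress signal; none of this is the actual slave code,
and a real circular buffer would avoid the per-chunk allocations.

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Placeholder stand-ins so the sketch compiles on its own; the counter just
// fakes a finite file of 20 x 1 MiB chunks.
static int g_remainingChunks = 20;

bool readChunk(std::vector<char> &chunk)
{
    if (g_remainingChunks-- <= 0)
        return false;
    chunk.assign(1024 * 1024, 'x'); // pretend we read 1 MiB from the server
    return true;
}

void writeChunk(const std::vector<char> &) {}
void emitProgress(std::size_t) {}

// Producer/consumer: the reader keeps issuing reads while a writer thread
// drains chunks and reports progress. The queue is bounded, which is the
// "buffer" idea: memory use stays capped at maxChunks * chunk size.
void copyWithThreadedWrite()
{
    std::mutex mutex;
    std::condition_variable cv;
    std::deque<std::vector<char>> queue;
    bool done = false;
    const std::size_t maxChunks = 4;

    std::thread writer([&] {
        for (;;) {
            std::vector<char> chunk;
            {
                std::unique_lock<std::mutex> lock(mutex);
                cv.wait(lock, [&] { return !queue.empty() || done; });
                if (queue.empty())
                    return; // reader finished and queue drained
                chunk = std::move(queue.front());
                queue.pop_front();
            }
            cv.notify_all(); // room freed up for the reader
            writeChunk(chunk);
            emitProgress(chunk.size());
        }
    });

    std::vector<char> chunk;
    while (readChunk(chunk)) {
        std::unique_lock<std::mutex> lock(mutex);
        cv.wait(lock, [&] { return queue.size() < maxChunks; });
        queue.push_back(std::move(chunk));
        cv.notify_all();
    }
    {
        std::lock_guard<std::mutex> lock(mutex);
        done = true;
    }
    cv.notify_all();
    writer.join();
}

int main()
{
    copyWithThreadedWrite();
    return 0;
}

The point of the bound on the queue is the same as the cap on the request
size: without it, a fast server and a slow local disk would let buffered
chunks pile up and blow up RAM usage.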