https://bugs.kde.org/show_bug.cgi?id=291835
--- Comment #44 from Mark <mark...@gmail.com> ---
(In reply to oli...@openbrackets.net from comment #43)
> @Mark
>
> one more thought...
>
> if 1. above is OK, i.e. smbclient / smbc_read|write can support an FD
> streaming approach, then, even though we are getting rid of the problem
> of determining buffer size, is that going to solve the throughput issue?
>
> I don't understand the problem in great detail yet, but it seems to me
> that because of the way the SMB protocol works, the way the libsmbclient
> API needs to be called and fed data is quite critical, ref
> https://bugs.kde.org/show_bug.cgi?id=291835#c26 where Wireshark forensics
> show that smbclient/cifs use the clever strategy of pre-fetching "one
> block ahead" to get max throughput...
>
> How can sendfile / splice ever understand these semantics?
>
> Would we be getting rid of the "deciding what size buffer to use"
> problem, while failing to address the actual problem of achieving highly
> optimised throughput...?

For all your questions: I don't know :) But I do know that sendfile/splice
works marvelously on sockets. If I do an iperf benchmark between my computer
and my local server I get around 990 Mbit/s. That is very fast for a 1 Gbit
connection; it's hitting the maximum throughput, and anything more is close
to impossible due to TCP overhead.

Now if I use sendfile/splice to copy a file over the same network I get
between 900 and 950 Mbit/s. Close enough for me :) And that is with just
those file descriptors, letting sendfile/splice handle whatever they want to
handle. I don't know how it internally does its smart stuff, but I do know
it's blazingly fast.

Whether all of this works with samba around it... we will just have to try
it out, I guess.
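To make the mechanism concrete (this is not from the comment above, just a
minimal C sketch of the sendfile(2) pattern being benchmarked): copy a local
file straight to a connected TCP socket and let the kernel pick the chunking
and read-ahead. The destination port (9000) and the bare-bones error handling
are illustrative assumptions.

/* Push a local file to a TCP peer with sendfile(2); no user-space
 * buffer means no "what buffer size?" question at all. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <dest-ipv4>\n", argv[0]);
        return 1;
    }

    int in_fd = open(argv[1], O_RDONLY);
    if (in_fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(in_fd, &st) < 0) { perror("fstat"); return 1; }

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);          /* assumed port, for illustration */
    if (inet_pton(AF_INET, argv[2], &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* The copy happens entirely in-kernel; sendfile() advances the
     * offset for us and may send less than requested, so loop. */
    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t n = sendfile(sock, in_fd, &offset, st.st_size - offset);
        if (n < 0) { perror("sendfile"); return 1; }
    }

    close(sock);
    close(in_fd);
    return 0;
}

Note that in_fd here is a plain local file the kernel can read directly;
whether libsmbclient can expose anything similarly splice-able is exactly
the open question in this thread.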