Hi! I realized that I didn't include the results of the tests I did, so here they are, in case they help someone else!

Treating the Maya part (getting/setting the weights) and the IO part (writing/reading the file) separately, I tried:

*Maya:*
- cmds.skinPercent
- MFnSkinCluster.setWeights
- get/setAttr (maya.cmds)
- get/setAttr (MPlug)

I get the best results with get/setAttr via MPlug. I don't remember the number of vertices of my test mesh, but I guess it was the same as in my previous email (i.e. 39k), and importing weights (which is the tricky part) took around 1.4s, against 4.7s with maya.cmds. This time includes reading the file (as I was interested in the entire operation, I didn't take the time to refactor the code and separate my timers). Exporting is roughly similar between cmds and MPlug (less than 1s for the same mesh).

*IO:*
I finally went for a json dict, with the weights entry compressed using cPickle. It seemed to be the fastest way, and it lets me keep everything in one file that is easily editable and understandable! There must be better options (e.g. using zlib, or hdf5, although I'd like to keep something native); I'll look into that in more depth later!

Thanks a lot for your help, anyway!
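For anyone landing here later, a rough sketch of what the MPlug-based import path can look like. This is my own illustration, not the exact code from the thread: the helper names (`flatten_weights`, `set_weights_via_mplug`) are hypothetical, and it assumes the standard skinCluster plug layout `.weightList[vertex].weights[influence]`. The Maya import is deferred so the helpers can be read (and tested) outside Maya.

```python
def flatten_weights(weights_per_vertex):
    """Turn {vertexId: {influenceIndex: weight}} into a flat, ordered
    [(vertexId, influenceIndex, weight), ...] list, skipping zeros so
    only the plugs that actually need a value get touched."""
    flat = []
    for vtx_id in sorted(weights_per_vertex):
        for inf_id, weight in sorted(weights_per_vertex[vtx_id].items()):
            if weight > 0.0:
                flat.append((vtx_id, inf_id, weight))
    return flat


def set_weights_via_mplug(skin_cluster_name, weights_per_vertex):
    """Write weights directly through MPlug (the fast path from the
    thread). Only runs inside Maya; hence the lazy import."""
    import maya.api.OpenMaya as om2

    sel = om2.MSelectionList()
    sel.add(skin_cluster_name)
    fn_node = om2.MFnDependencyNode(sel.getDependNode(0))

    # weightList is a compound array plug: weightList[vtx].weights[inf]
    weight_list_plug = fn_node.findPlug("weightList", False)
    for vtx_id, inf_id, weight in flatten_weights(weights_per_vertex):
        weights_plug = weight_list_plug.elementByLogicalIndex(vtx_id).child(0)
        weights_plug.elementByLogicalIndex(inf_id).setFloat(weight)
```

One caveat with only setting non-zero plugs: any stale weights already on the skinCluster stay behind, so you may want to zero out or prune existing weights first.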
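And a minimal sketch of the "json dict with a cPickle-compressed weights entry" file format described above. The base64 wrapping is my addition (pickle produces raw bytes, which aren't valid JSON, so they need a text-safe encoding to live in the file), and the function names are hypothetical:

```python
import base64
import json
import pickle  # cPickle in the Python 2 that ships with older Maya versions


def dump_weights(path, influences, weights):
    """Store readable info as plain JSON, and the bulky weights array
    as a pickled, base64-encoded string inside the same dict."""
    payload = {
        "influences": influences,  # the easily editable/readable part
        "weights": base64.b64encode(pickle.dumps(weights, 2)).decode("ascii"),
    }
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)


def load_weights(path):
    """Read the JSON dict back and unpickle the weights entry."""
    with open(path) as f:
        payload = json.load(f)
    weights = pickle.loads(base64.b64decode(payload["weights"]))
    return payload["influences"], weights
```

Swapping `pickle.dumps(...)` for `zlib.compress(pickle.dumps(...))` would be the zlib variant mentioned above, at the cost of one more decode step on import.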
On Thursday, 13 October 2016 at 13:35:36 UTC-7, fruity wrote:
>
> Hi Marcus!
>
> Thanks for your answer and help! Well, I'm still working on the optimisation; I used json for the readable info (influences, etc.) and cPickle for the weights array. But I think most of the optimisation should now come from how the values are exported to / imported from the vertices.
> Exporting is not that expensive (~0.78s for 39k vertices and 2 influences), but importing is still ~4.76s. It seems there are different ways of reading/writing weights, and it takes some time to try all of them! For now, skinPercent is definitely the worst idea (about 29s for importing ^^), and I read that MFnSkinCluster is not necessarily the best option either, at least using getWeights() and setWeights() (http://www.macaronikazoo.com/?p=417). The fastest may be based on querying the values via API plugs.
> Long story short, there are a lot of ways of doing it, so I need to try all of them, but I think the part I need to work on is more the Maya part than the 'data' part?
> hdf5 looks great (I wish I could have a look at the book you mentioned on StackOverflow, too late now... ;-), but it's not native (because of the use of numpy?), unfortunately. I'm not really informed about Alembic's possibilities and what you can or can't do with it, but it's definitely something I want to investigate; it looks super powerful!
> Thanks for the help!
>
> On Thursday, 13 October 2016 at 14:12:20 UTC+2, Marcus Ottosson wrote:
>>
>> Hey @fruity, how did it go with this? Did you make any progress? :)
>>
>> I came to think of another constraint, or method with which to do what you're after - in regards to random access. That is, being able to query weights for any given vertex, without (1) reading it all into memory and (2) physically searching for it.
>>
>> There's a file format called HDF5 <https://support.hdfgroup.org/HDF5/> which was designed for this purpose (and which has Python bindings as well). It's written by the scientific community, but applies well to VFX in that they also deal with large datasets of high precision (in this case, millions of vertices and floating-point weights). To give you some intuition for how it works, I formulated a StackOverflow question <https://stackoverflow.com/questions/22125778/how-is-hdf5-different-from-a-folder-with-files> about it a while back that compares it to a "filesystem in a file"; it has some good discussion around it.
>>
>> In more technical terms, you can think of it as Alembic. In fact, Alembic is a "fork" of HDF5, which was later rewritten (i.e. "Ogawa" <https://github.com/alembic/alembic/tree/master/lib/Alembic/Ogawa>) but maintains (to my knowledge) the gist of how things are organised and accessed internally.
>>
>> At the end of the day, it means you can store the results of your weights in one of these hdf5 files and read it back either as you would any normal file (i.e. entirely into memory) or via random access - for example, if you're only interested in applying weights to a selected area of a highly dense polygonal mesh. Or, if you have multiple "channels" or "versions" of weights within the same file (e.g. 50 GB of weights), you could pick one without requiring all that memory to be readily available.

--
You received this message because you are subscribed to the Google Groups "Python Programming for Autodesk Maya" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/python_inside_maya/e1d4a87a-65fb-411a-8f3f-ac3ab69c85cf%40googlegroups.com.
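To make the random-access idea from Marcus's reply concrete without pulling in h5py, here's a stdlib-only sketch: with fixed-size binary records you can seek straight to one vertex's weights instead of reading the whole file into memory. HDF5 gives you this same property plus named datasets, chunking, and compression. All names here are my own illustration, not an API from the thread.

```python
import struct

WEIGHT_FMT = "<d"  # one little-endian double per weight
WEIGHT_SIZE = struct.calcsize(WEIGHT_FMT)


def write_flat_weights(path, weights_per_vertex, num_influences):
    """Write fixed-size records: vertex i starts at byte
    i * num_influences * WEIGHT_SIZE, which is what makes seeking possible."""
    with open(path, "wb") as f:
        for vertex_weights in weights_per_vertex:
            assert len(vertex_weights) == num_influences
            for w in vertex_weights:
                f.write(struct.pack(WEIGHT_FMT, w))


def read_vertex_weights(path, vertex_id, num_influences):
    """Random access: jump directly to one vertex's record and read
    only those bytes, never the rest of the file."""
    record_size = num_influences * WEIGHT_SIZE
    with open(path, "rb") as f:
        f.seek(vertex_id * record_size)
        data = f.read(record_size)
    return [struct.unpack_from(WEIGHT_FMT, data, i * WEIGHT_SIZE)[0]
            for i in range(num_influences)]
```

This only works because every record has the same size; sparse or variable-length weight storage is exactly where a real container like HDF5 (or Alembic's Ogawa) earns its keep.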
