Hello Martin,

On Wednesday 01 of April 2015 21:08:19 Martin Galvan wrote:
> On Wed, Apr 1, 2015 at 8:01 AM, Pavel Pisa <p...@cmp.felk.cvut.cz> wrote:
> > To Martin Galvan: What is your opinion? Would you join the work in
> > this direction?
>
> So if I understood correctly, what you did was converting the
> datasheet PDF to some parseable format, then you parsed that file to
> extract the registers and bit info and save it as JSON files, which
> you feed to a separate Python script that generates .h files with all
> the necessary #defines.
Yes. The idea is to define a simple JSON format which holds all the
information and can even be prepared or edited by hand. Unfortunately,
header files are a poor format for manual editing, and according to
Joel's and Sebastian's preferences we need three defines per field. I
agree that this is a reasonable solution to get a consistent, standard
approach to header files across different chips.

> While the approach is indeed interesting, I don't think it's the most
> convenient way to solve this problem. Here's why:
>
> 1) The parse script seems to be highly datasheet-dependant, thus
> making it non-portable for a different board. The script will break
> even if the datasheet for the same board changes in the slightest bit
> (or rather, the parsable file generated from the PDF).

It is a horrible amount of hand work, and the tools and macro helpers
used are not included in the repo. Ask Premek for a more detailed
description of his approach. I had success with simple pdftotext and
sed for some other chips, generating header files thousands of lines
long, and that was my initial motivation. But unfortunately the Ti
manual is prepared by multiple people and/or tools for almost each
peripheral, so all straightforward solutions have proved unusable.

The Python script is for cleaning the generated JSONs and then for
generating the final header files from them. But even writing the
JSONs by hand is simpler than writing the headers, and the JSONs can
easily be read for checking, debugging and other support. Header files
are a bad format for that, and if overlay structures are used, writing
them manually from register offsets is quite error prone.

> 2) The cyclomatic complexity of the parse script is quite high, making
> it hard to maintain.
>
> Even if the parse script stays in the rtems-tms570-utils repo, I still
> think that this approach doesn't scale because of how difficult it
> would be to build a separate parse script for any future boards, just
> so you can get the JSON files.
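For illustration, the JSON-to-header step I described could look
roughly like this. The JSON schema and the macro names below are my
own sketch, not the actual rtems-tms570-utils format; the three
defines per field follow the BSP_FLD32/BSP_FLD32GET/BSP_FLD32SET
helper style found in RTEMS BSP headers (<bsp/utility.h>):

```python
import json

# Hypothetical hand-editable register description; the real
# rtems-tms570-utils JSON schema may differ.
desc = json.loads("""
{
  "peripheral": "GIO",
  "registers": [
    { "name": "DIR",
      "fields": [ { "name": "DIR", "msb": 7, "lsb": 0 } ] }
  ]
}
""")

def emit_defines(desc):
    """Emit three defines per field (value/get/set) in the
    BSP_FLD32* style used by RTEMS BSP headers."""
    lines = []
    p = desc["peripheral"]
    for reg in desc["registers"]:
        for f in reg["fields"]:
            base = "TMS570_%s_%s_%s" % (p, reg["name"], f["name"])
            span = "%d, %d" % (f["lsb"], f["msb"])
            lines.append("#define %s(val) BSP_FLD32(val, %s)" % (base, span))
            lines.append("#define %s_GET(reg) BSP_FLD32GET(reg, %s)" % (base, span))
            lines.append("#define %s_SET(reg, val) BSP_FLD32SET(reg, val, %s)" % (base, span))
    return "\n".join(lines)

print(emit_defines(desc))
```

A JSON description like the one above can also be read back for
checks and debugging, which is exactly what plain header files make
hard.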
> It would be way easier to simply write
> the headers by hand as they're needed.
>
> I agree that writing large header files by hand is a drag, which is
> why we have tools like Halcogen in the first place. I understand
> there's a licensing issue here, but just how much different can a
> register definition file be? You can license code, but you can't
> license facts, and most of these files end up being just a mirror of
> what's on the datasheet. I've seen this approach being followed on
> other projects such as FreeBSD, where the headers used by some drivers
> are a carbon copy of the ones used in Linux.

We have Matlab/Simulink code for a broad range of peripherals on the
chip, but all of it targets HalCoGen and Ti toolchains. Really, to be
useful for RTEMS one day, we need at least the structures for all
peripheral registers to be available.

> The same goes for generating the boot/initialization code using the
> script approach. I don't know if such a thing is even possible; but
> I'm sure it'll be quite complex. And we still don't know for certain
> how reliable this whole approach is.

No, the boot code has to be written from scratch, with the manual and
the HalCoGen-generated code as an algorithm reference. It is not such
a large amount of code in the end, but it is quite complex. As for
FlexRay and the other peripherals we have supported, these do not use
much of the HalCoGen code. The main work is in the driver logic,
which HalCoGen does not help with much.

> And then there's the timing issue. Given how much effort it has taken
> to get to this point, I don't think it's worth to invest even more
> time trying to follow the same approach for init code. You mentioned 8
> months as a cap; there's got to be a faster way to fix this.

The project has not been worked on so actively. Premysl Houdek has
other duties, subjects, etc.

> I suggest we go for a simpler, faster approach and refactor the
> Halcogen-generated code.
> It's almost the same as following the
> datasheet by hand, only a bit easier on the developer. We can remove
> most of the hooks Halcogen generates, get rid of some redundant
> typedefs, and even find and fix some bugs (such as this one:
> http://e2e.ti.com/support/microcontrollers/hercules/f/312/p/390756/1380267).
> As long as the refactor is being done consciously, I don't see how there
> could be a licensing issue.

Please try to negotiate some permissive license which allows inclusion
of the headers, or try to negotiate machine processing and use of a
copy of the HalCoGen-generated headers (better yet, even the generated
sources) in a GPL-licensed project. I have tried and failed, and we
have been quite unhappy about, and well aware of, the amount of
additional work caused by that.

I have checked what the HalCoGen XML MCU description files look like,
and it would be possible to generate the header files from them
directly, without using the HalCoGen tool. But even that last-resort
solution was declared not recommended and one that would not be
blessed by Ti's legal department. Ti support people said that the
mechanisms and licenses are decided (possibly even negotiated with
tool vendors) at the time an MCU is released, and that changing a
license after that point is highly problematic even for Ti. But you
may be more successful if you declare a budget and projected
production quantities.

I have seen that even the Arctic Core GPL sources use their own
hand-written sources for their small subset of supported peripherals,
so they did not dare to use Ti-generated headers either.

Talking with Ti support is worthwhile even if you are not successful,
because if they register demand and multiple requests, it can help the
support people influence the process for future chips' headers and
tooling decisions. For example, Ti completely changed policy for
MSP430 support, where they now provide GCC support as a first-class
citizen and pay Red Hat directly to mainline and maintain the GCC
support.
http://www.ti.com/tool/msp430-gcc-opensource

They had not been cooperative in this area for years in the past. But
even though I like the MSP430 and we have used it in medical products,
I think the move unfortunately came too late, and most of the
potential open community has moved to ARM Cortex-M0 etc., because
there have been far fewer problems with freely available tools
support for ARM.

> We're willing to work on refactoring the Halcogen code ourselves.

I am happy that you consider this as one possible way.

> If you already have the whole set of generated headers ready and tested
> then feel free to send a patch and we'll test it on our side. I
> skimmed through the code and saw a couple of issues here and there,
> such as "ui32_t" being used instead of uint32_t (is this a typo?) but
> I think we can work with it as long as the macro names are the same as
> those used on the datasheet.

This is an initial test of the generator. We have not used the headers
to compile anything yet. But typos like that should be easy to
correct, because it is a matter of a single line in the Python script.

A similar approach to generating headers has been used for years in
the project aiming to (finally) enable open-source support for
Nvidia, Adreno and other GPUs (Nouveau, Freedreno etc.):

https://github.com/envytools/envytools (rnndb)

But that system is more complex, and JSON is simpler to maintain by
hand than XML. There would be no problem converting our JSONs to
envytools "rnndb"-compatible XML if that format is preferred in the
future.

Best wishes,

Pavel

_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel