I like the idea of an expand button. We'll probably use something like
that. On the other hand, the primary use case of the Forge quality scores
is a user trying to evaluate and choose modules, so we're highly focused
on that. We probably won't clutter that view with author-specific
details. HOWEVER...

By end of year, I'm writing a score preview extension for the new PDK as a
demo of the plugin system, so as a module author you'll get that feedback
during development, even before publishing. Yay, look at that: a plugin
system so easy that a product manager can use it, and a tool that gives
you early feedback, right when you need it the most!

On Thu, Nov 4, 2021 at 10:14 AM Nick Maludy <[email protected]> wrote:

> Ben,
>
> As you describe them, the summary scores for new users make sense. One UI
> idea I just had as I read your reply was the following:
>
> - Give an overall summary score, "4.3" or something
>   - Have a "+" or ">" button that can expand that score and show the
>     components that went into it (sketched below)
>   - For each line item, show how many points it was worth, so you can
>     total up the score
>   - Also show the things that were missing and how many points they were
>     worth, so module developers know what needs to be fixed/improved in
>     order to raise their score
>
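> (Illustratively, and with made-up check names and point values, the data
> behind that expanded view might look something like this:)
>
>   score = {
>     total: 4.3,  # sum of earned points, out of a possible 5.0
>     earned: [
>       { check: 'metadata-json-lint passes', points: 1.5 },
>       { check: 'puppet-lint clean',         points: 1.5 },
>       { check: 'REFERENCE.md present',      points: 1.3 },
>     ],
>     missing: [
>       { check: 'acceptance tests detected', points: 0.7 },
>     ],
>   }
>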
> Just an idea I had and wanted to "brain dump" while it was fresh.
>
> -Nick
>
> On Thu, Nov 4, 2021 at 12:56 PM Ben Ford <[email protected]> wrote:
>
>> On Thu, Nov 4, 2021 at 8:11 AM Gene Liverman <[email protected]>
>> wrote:
>>
>>> After reading through all the replies, there are bits and pieces from
>>> many people that seem to cover most of my opinions. I am going to try to
>>> collect those here:
>>>
>>>> What about this idea? Instead of trying to "measure quality" as a
>>>> metric, maybe try to expose individual checks or metrics about various
>>>> aspects of a module and let the user decide "is this high quality for me,
>>>> or not"?
>>>>
>>> I like this idea. Present a reasonable set of data about things
>>> different users may care about, or perhaps should care about, along with a
>>> link to some docs explaining why people care about the listed things.
>>>
>>
>> This makes it really easy for Puppet experts to read, but doesn't do much
>> for new or casual users. This is why we temporarily turned the detail view
>> off on the Forge: innocuous warnings were unfairly frightening users away
>> from decent modules that didn't have high scores. Our roadmap for the
>> rest of the year includes working on a more user-friendly view that will
>> reintroduce the details in a more comprehensible way. Some of the score
>> explanations are being driven by this very conversation!
>>
>>
>>> Regarding unit tests, I find the use of rspec-puppet-facts
>>> <https://github.com/voxpupuli/rspec-puppet-facts> (and thereby facterdb
>>> <https://github.com/voxpupuli/facterdb>) to be a must. I hold this
>>> opinion for two reasons:
>>> 1) as a maintainer, it ensures that my tests cover all the things I
>>> have listed in metadata.json (or at least those supported by the gems),
>>> which is great in general and especially important when the supported OS
>>> list gets modified.
>>> 2) as a user, if I am looking into how a module works, it helps me quickly
>>> see that the maintainer is testing across all the needed OSes, without
>>> having to read every line of every spec test looking for the OS combos
>>> I care about.
>>>
>>
>> This is an interesting point. Maybe I'll expand the scope of this just a
>> bit and ask a more meta question: if we're programmatically assigning a
>> quality score, do we think it's a good idea to give points for
>> adhering to a "standard" testing toolchain? E.g., puppetlabs_spec_helper,
>> facterdb, pdk, etc.
>>
>> And if we do, then what about modules for which facterdb doesn't actually
>> provide any benefit? A module that doesn't use facts doesn't need to test
>> with different factsets. How would we distinguish between those cases?
>>
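>> (Just to make that concrete: one naive heuristic, purely a sketch, would
>> be to scan the manifests for fact references before awarding any
>> facterdb points:)
>>
>>   # Naive sketch: does any manifest reference facts at all?
>>   # If not, testing across different factsets arguably adds little.
>>   uses_facts = Dir.glob('manifests/**/*.pp').any? do |path|
>>     File.read(path).match?(/\$facts\b|\$::\w+/)
>>   end
>>   puts(uses_facts ? 'fact usage found' : 'no fact usage detected')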
>>
>>> As a user, I want to see that there are at least minimal tests covering
>>> the public bits - i.e. at least an "it { is_expected.to compile.with_all_deps
>>> }" run via rspec-puppet-facts on each supported OS. I prefer to see more,
>>> but I also understand that many people who write Puppet code are not nearly
>>> as comfortable writing tests.
>>>
>>
>> I'm inclined to say that the bar is that a spec file exists for each
>> manifest. (Ewoud's use case of defining the tests in a single loop
>> could be handled by putting in a file for each path with just a comment
>> explaining where the test actually lives, or something similar.) It would
>> be better to use rspec-puppet test coverage, but I don't think we're ready
>> to actually run the tests on module publication yet. (A future improvement?)
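>>
>> To make Gene's minimal example concrete, a per-class spec using
>> rspec-puppet-facts looks roughly like this (the class name is a
>> placeholder):
>>
>>   require 'spec_helper'
>>
>>   describe 'mymodule::myclass' do
>>     on_supported_os.each do |os, os_facts|
>>       context "on #{os}" do
>>         let(:facts) { os_facts }
>>
>>         # The bare minimum: the catalog compiles with all dependencies.
>>         it { is_expected.to compile.with_all_deps }
>>       end
>>     end
>>   end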
>>
>>
>>
>>> Regarding integration tests, I love to see them, but it takes a lot more
>>> knowledge to write them than it does to write a good Puppet module. I would
>>> love to see straight away that a module has them (and that CI executes
>>> them), but I wouldn't hold it against an author for not having any.
>>>
>>
>> What if the view included a list of platforms the module has acceptance
>> tests for? It would be informational only, rather than affecting the overall
>> quality score. This would only recognize the standard testing toolchain(s),
>> of course, but I think it's doable.
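>>
>> (A rough sketch of the detection, assuming the module uses a Litmus-style
>> provision.yaml; other toolchains would need their own parsers:)
>>
>>   require 'yaml'
>>
>>   # Sketch: list acceptance test platforms from a Litmus-style
>>   # provision.yaml, when one exists.
>>   if File.exist?('provision.yaml')
>>     provision = YAML.safe_load(File.read('provision.yaml'))
>>     images = provision.values.flat_map { |set| Array(set['images']) }.uniq
>>     puts "Acceptance test platforms: #{images.join(', ')}"
>>   else
>>     puts 'No recognized acceptance test setup detected.'
>>   end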
>>
>>
>>> Personally, I find having a module documented with puppet-strings to be
>>> critical for two reasons:
>>> 1) it provides lots of useful information within the source code of the
>>> module
>>> 2) it enables the programmatic generation of a REFERENCE.md file that
>>> can then be read on GitHub/GitLab and rendered on the Forge.
>>>
>>> Examples can also be included there and thereby referenced by users in
>>> either location too. I think README.md should have a very minimal set
>>> of examples in it. Most examples should be kept closer to what they
>>> describe via puppet-strings, IMO.
>>>
>>> Speaking of README.md, I think looking for select key sections would be
>>> worthwhile. It should contain the following at a minimum:
>>> - an H1 title at the top
>>> - badges
>>>   - one that shows the version released on the Forge and links to the
>>>     module on the Forge
>>>   - build status
>>>   - license (ideally via the shields.io badge that reads the license
>>>     file)
>>> - an H2 Usage section
>>> - an H2 Reference section that at least contains text referencing
>>>   REFERENCE.md
>>> - an H2 Changelog section that at least contains text referring to
>>>   CHANGELOG.md
>>>
>>
>> Sounds like a puppet-readme-lint tool to me! We can improve the spec
>> <https://puppet.com/docs/puppet/latest/modules_documentation.html> and
>> test for adherence to it. We could even consider integrating with
>> https://errata-ai.github.io/vale-server/docs/style or some such.
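>>
>> (A hypothetical first pass, just to show the shape of it; the heading
>> names come from Gene's list above:)
>>
>>   readme = File.read('README.md')
>>
>>   # Check for the key sections a module README should carry.
>>   checks = {
>>     'H1 title'          => /\A#\s+\S/,
>>     'Usage section'     => /^##\s+Usage\b/,
>>     'Reference section' => /^##\s+Reference\b/,
>>     'Changelog section' => /^##\s+Changelog\b/i,
>>   }
>>
>>   checks.each do |name, pattern|
>>     puts "#{readme.match?(pattern) ? 'ok' : 'MISSING'} - #{name}"
>>   end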
>>
>>
>>> One other thing I wish there was a good way to flag, maybe as part of
>>> metadata-json-lint, is when author, summary, license, source, project_page,
>>> and issues_url are not filled out in an expected format (or are absent
>>> altogether).
>>>
>>
>> We can absolutely improve metadata-lint to include whatever checks we
>> think are useful. Probably a good first step would be a formal spec for
>> that file 😜
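>>
>> (Sketching the kind of check that could grow into this, with the field
>> list from Gene's message:)
>>
>>   require 'json'
>>
>>   # Warn when descriptive metadata.json fields are missing or empty.
>>   metadata = JSON.parse(File.read('metadata.json'))
>>
>>   %w[author summary license source project_page issues_url].each do |key|
>>     value = metadata[key].to_s.strip
>>     warn "metadata.json: '#{key}' is missing or empty" if value.empty?
>>   end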
>>