Getting a little off topic, but you mentioned using the script as documentation:

>> Indeed, a reasonably well written NAnt script could even (with a small
>> stretch of imagination) be considered to be the documentation.

>> Even if you aren't familiar with the structure of NAnt, something like
>> the following is reasonably self-documenting:
>> <target name="install" depends="install-vs.net, install-nunit,
>> install-something-else"/>

You can take this a step further by using an XSLT to transform the NAnt build
file into HTML (or whatever format you like), so with a little more
structure, your script can document itself, and you can even post the result
to an intranet somewhere.

I have one that generates wiki markup from the build file for our internal
wiki.  I haven't figured out how to automate posting it to the wiki yet, but
it's a start.
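As a rough sketch of the idea (the element names assume a standard NAnt build file; the HTML layout is just illustrative), an XSLT like this could list each target with its description and dependencies, and you could run it with NAnt's own <style> task:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <!-- the root of a NAnt build file is <project> -->
  <xsl:template match="/project">
    <html><body>
      <h1><xsl:value-of select="@name"/></h1>
      <ul>
        <!-- one list item per target: name, description, dependencies -->
        <xsl:for-each select="target">
          <li>
            <b><xsl:value-of select="@name"/></b>
            <xsl:text> - </xsl:text>
            <xsl:value-of select="@description"/>
            <xsl:if test="@depends">
              <xsl:text> (depends: </xsl:text>
              <xsl:value-of select="@depends"/>
              <xsl:text>)</xsl:text>
            </xsl:if>
          </li>
        </xsl:for-each>
      </ul>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```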

Thanks,
Ryan 
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Troy Laurin
Sent: Tuesday, October 18, 2005 8:35 PM
To: Anderson, Kelly
Cc: nant-users@lists.sourceforge.net
Subject: Re: [Nant-users] Install Tools

On 10/18/05, Anderson, Kelly <[EMAIL PROTECTED]> wrote:
> If you just expect the tools to be installed, then you have a job
> setting up a new machine, virtual machine or whatever. If you do this
> often, it might be worth automating, and it might be as easy to write a
> NAnt script to do this as to document the process.

Indeed, a reasonably well written NAnt script could even (with a small
stretch of imagination) be considered to be the documentation.

Even if you aren't familiar with the structure of NAnt, something like
the following is reasonably self-documenting:
<target name="install" depends="install-vs.net, install-nunit,
install-something-else"/>

Of course, this requires all of the tools you are using to support silent
installs.  Or, for tools that don't require any kind of registration, you can
just extract an archive to a directory.
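For example (the installer path is hypothetical, and the /quiet and /norestart switches are standard msiexec flags, but check each tool's documentation for its actual silent-install options), one of those install targets might look like:

```xml
<target name="install-nunit" description="Install NUnit silently">
  <!-- msiexec /quiet suppresses the UI; /norestart avoids a reboot -->
  <exec program="msiexec.exe">
    <arg value="/i"/>
    <arg value="tools\NUnit.msi"/>  <!-- hypothetical installer path -->
    <arg value="/quiet"/>
    <arg value="/norestart"/>
  </exec>
</target>
```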

> A similar question comes up as to whether to do a clean before a build
> every time, or just some of the time (say the nightly build) or just
> when manually requested.

A bit of a tangent, but regarding cleans in a nightly (or CI, or other
regular) build... Every now and then the structure of a project may
change.  To cater for this kind of change, I found that the most
reliable order of execution was:

* Trigger condition for a build (CI change, or nightly schedule)
* Run the clean target
* Update the build script (and the rest of the source tree)
* Run the build (test, dist, etc) target
* Build reporting

Note that this requires calling the build twice: once before the build
script is updated, and again after.  This ensures that the project is always
cleaned in the same way that it was built.  If you update the script before
cleaning, you are cleaning the new structure, and old resources may be left
hanging around, for example if you change the name of the folder that
resources are built into.
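The sequence above could be sketched as a small bootstrap target (the tool names here, svn and nant.exe, are just assumptions about the environment, and "build" is a placeholder target name):

```xml
<target name="bootstrap">
  <!-- 1. clean using the *current* (old) build script -->
  <call target="clean"/>

  <!-- 2. update the source tree, including the build script itself -->
  <exec program="svn">
    <arg value="update"/>
  </exec>

  <!-- 3. re-invoke NAnt so the *updated* script runs the real build -->
  <exec program="nant.exe">
    <arg value="build"/>
  </exec>
</target>
```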

> These clean builds are useful from time to time
> especially if files have been deleted from revision control, and you
> want to make sure everything still works, but they can also be very slow
> for a Continuous Integration environment.

I personally think that it's very important for a Continuous Integration
build to be as definitive as possible.  That is, if a developer checks in
some changes and triggers a build, a "build successful" message must mean
that his/her changes didn't break anything (that is covered by automated
tests).

If a clean is only performed every five builds, for example, then a
developer may get a "build successful" after removing a method that is
used by a different project, because the clean target wasn't run...
then a few builds later, some other developer gets the "build failed"
message, but can't understand it because it involves code that s/he
didn't touch... or even worse, it *does* involve code s/he touched,
and so wastes time trying to find out what s/he did wrong when the
build was actually broken by someone else.

It's also very important that CI doesn't become a crutch, or an excuse
for developers not to test on their own machine, and perform code
reviews, and other such excellent practices... but developers are
people too, and make mistakes, and CI should be there to catch those
mistakes, and so (IMHO) should be as comprehensive as possible.


If the CI build starts to take too long, there are techniques for
shortening the process... if the unit tests take too long to run, then
change-triggered builds could run a subset of "smoke tests" which
quickly test large parts of the system, and the full test suite could
be left to an overnight build.  If the solution is growing too large
and the compile stage is taking too long, then you could look at
separating out various projects (that perhaps change less frequently)
and treating them as libraries, so they are built separately and just
referenced as part of the main build.
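If the unit tests are tagged with NUnit categories, the change-triggered build could run just the fast subset with NAnt's nunit2 task (the "Smoke" category name and the assembly path are made up for illustration):

```xml
<target name="smoke-test" description="Run only the fast smoke tests">
  <nunit2>
    <formatter type="Xml"/>
    <test assemblyname="build\MyProject.Tests.dll">
      <categories>
        <!-- run only tests marked [Category("Smoke")] -->
        <include name="Smoke"/>
      </categories>
    </test>
  </nunit2>
</target>
```

The full suite, without the category filter, would then be left to the overnight build.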

I guess my suggestion is that CI (or automated builds) are more useful
when they are comprehensive even at the expense of speed, than when
they are fast but do not offer confidence.


Regards, and thanks for reading this far ;-)

--
Troy


-------------------------------------------------------
This SF.Net email is sponsored by:
Power Architecture Resource Center: Free content, downloads, discussions,
and more. http://solutions.newsforge.com/ibmarch.tmpl
_______________________________________________
Nant-users mailing list
Nant-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nant-users


