Hi Aaron,

Please find attached a small (and definitely imperfect/ugly/hacky) callback 
plugin that I wrote for Ansible. It produces xUnit output from Ansible playbook 
runs and integrates very nicely into Jenkins (select "Check Version N/A" as the 
report type in the xUnit plugin; see http://check.sourceforge.net).

It should be copied to {{ ansible_dir }}/ansible/lib/ansible/callback_plugins/

I use it for TDD of my own infrastructure/code and use it in a pattern very 
similar to what John linked to below. Complex tests are implemented as scripts 
(rolename/files), and "test suites" are bundled into roles 
(roles/test_server_live, roles/test_server_logon, etc).
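
For illustration, the layout of such a "test suite" role ends up looking 
roughly like this (role and file names are just examples):

```
roles/
  test_server_live/
    tasks/main.yml
    files/check_http_up.sh
  test_server_logon/
    tasks/main.yml
    files/check_vv.sh
```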

I've found that it works really well when integrated into my build processes 
on Jenkins to catch errors and test for expected behaviour at an 
infrastructure/functional level. Of course, it is up to you to use Ansible 
creatively to do the tests. I use a combination of modules and scripts wrapping 
3rd-party tools, and rely on the return code to trigger "success" or "failure" 
in Ansible.
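
As a sketch of what such a wrapper script can look like (this is a made-up 
example, not one of my actual checks; the pidfile path would be whatever your 
service uses), the only contract is that it exits non-zero on failure:

```python
#!/usr/bin/env python
# Hypothetical check: is the process named in a pidfile still alive?
# Ansible treats a non-zero exit code from a script as a task failure.
import os
import sys
import tempfile

def process_running(pidfile):
    """Return True if the pid recorded in pidfile refers to a live process."""
    if not os.path.exists(pidfile):
        return False
    with open(pidfile) as f:
        pid = int(f.read().strip())
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends nothing
        return True
    except OSError:
        return False

# Demo: a pidfile pointing at this very process should pass the check
demo = tempfile.NamedTemporaryFile(mode="w", suffix=".pid", delete=False)
demo.write(str(os.getpid()))
demo.close()

print(process_running(demo.name))            # -> True
print(process_running("/no/such/file.pid"))  # -> False
```

In a real script you would end with `sys.exit(0 if ok else 1)` so the return 
code carries the result back to Ansible.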

One other note: I have ignore_errors peppered throughout the playbooks so that 
all relevant tests can run. Combined with Ansible's conditionals, this makes 
for a very powerful setup. Here is a short example:

- name: Check vagrant user logon (should fail)
  local_action: shell roles/test_auth_local/files/check_vv.sh 
  ignore_errors: yes 
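
Combined with register and a conditional, the pattern can be extended along 
these lines (an untested sketch; role, script, and variable names are 
illustrative):

```yaml
- name: Check vagrant user logon (should fail)
  local_action: shell roles/test_auth_local/files/check_vv.sh
  register: logon_check
  ignore_errors: yes

- name: Flag it if the forbidden logon actually worked
  fail: msg="vagrant user was able to log on"
  when: logon_check.rc == 0
```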

I hope that this is a step in the right direction; with any luck someone can 
polish the plugin and integrate it into the Ansible codebase. Otherwise, if 
there is interest, please say so and I'll try to get it onto GitHub for a 
proper pull request, or maintain it separately.

Kind regards,
Stephan



p.s. Humble apologies to all the Python developers for the ugly hack :)


On 11 Dec 2013, at 8:00 PM, John Dewey <[email protected]> wrote:

> Hi Aaron -
> 
> I too would find it useful to have the ability to “unit test” my tasks.  
> However, I have opted to create a testing playbook [1] which handles 
> integration testing.  It is not perfect, but it allows for TDD/BDD and 
> integrates into CI gating.
> 
> [1] https://github.com/blueboxgroup/ursula/tree/master/playbooks/tests/tasks
> On Wednesday, December 11, 2013 at 9:54 AM, Aaron Hunter wrote:
> 
>> I see your point but I'm not sure I agree. "Unit testing" may not be the 
>> best term for it, but it's not too far off. Consider this case instead: 
>> installing and configuring a DHCP server. I have my senior admin write down 
>> exactly what the DHCP deployment should look like. This is how success will 
>> be measured. Testing this will include checking file permissions (which is 
>> also a security issue), that processes are running with the right settings, 
>> logging, and a whole set of functionality based on the config file of the 
>> DHCP server. 
>> 
>> Another admin (let's say a junior admin) writes the Ansible code to 
>> accomplish this. When the code passes the test, it is done. Certainly this 
>> crosses into what is traditionally called functional testing. Testing IT 
>> infrastructure is somewhat different from testing new software 
>> development.  I would still test what is in the Ansible script for the DHCP 
>> server because the junior admin may have gotten it wrong, and besides, it's 
>> still code and you always test code. I would also have the Ansible scripts 
>> initially written in a development environment with one suite of tests. 
>> These tests will be very obtrusive and are not the same ones I would use to 
>> test a successful deployment. 
>> 
>> To be sure, the most important part is testing that the DHCP server works as 
>> expected (i.e., testing the config file), which is not an Ansible issue. 
>> Still, I would test everything.  The issue for me is what to use to write 
>> the tests. Cucumber? JBehave? Plain Python? I know plenty of tools designed 
>> to test custom apps/code; I don't think the IT testing tools have caught up 
>> yet. 
>> 
>> Maybe this isn't even an Ansible issue. I was just speculating on how it 
>> might include an automated, TDD-like test process in the framework.
>> 
>> ---
>> Aaron
>> DevOps Blog: http://www.sharknet.us
>> 
>> On Wednesday, December 11, 2013 11:55:40 AM UTC-5, Michael DeHaan wrote:
>>> 
>>> 
>>> Yes, this is already super easy to do today: basically, just call your 
>>> tests at the end as the last step of your playbook.
>>> 
>>> Executing arbitrary python code is possible, but you can use ansible 
>>> modules like get_url and fail and so on.
>>> 
>>> If you want to push a python script, the 'script' module is awesome for 
>>> that.
>>> 
>>> Many of our users have tests integrated with their continuous deployment 
>>> process so it will fail the rolling update block before moving on to the 
>>> next, thus not taking more machines out of rotation.
>>> 
>>> However, if you feel you have to test the file module, seriously, you're 
>>> wasting time -- if the file module doesn't work for you, how good is the 
>>> product?  It would be much better to test instead for something functional, 
>>> like whether your web service is operational, rather than duplicating all 
>>> the basics of Ansible in, as you say, arbitrary python code just to make 
>>> sure Ansible works.
>>> 
>>> There's a difference between unit and integration testing, and also in 
>>> testing a live deployment.
>>> 
>>> Unit tests are things you run on development code.
>>> 
>>> 
>>> 
>>> 
>>> On Wed, Dec 11, 2013 at 11:39 AM, Aaron Hunter <[email protected]> wrote:
>>>> I come from an Agile software development background in which test-driven 
>>>> development (TDD) is the norm. As I write Ansible scripts, I'd like some 
>>>> way of testing them. In principle, I want to test every command in a 
>>>> playbook. For example, if one of my commands changes the user permissions 
>>>> on a file, I want a test that independently confirms that it has in fact 
>>>> done so. I don't see a "test" module but I may have missed it.
>>>> 
>>>> Is that something that Ansible may offer some day? I'm thinking of the 
>>>> Ansible equivalent to unit testing. I believe it would require the ability 
>>>> to execute arbitrary Python code in the test. The Java tests I have 
>>>> written could certainly be very complex. 
>>>> 
>>>> I'm also curious what others do for testing using Ansible. What 
>>>> frameworks, etc.
>>>> 
>>>> Thanks,
>>>> Aaron
>>>> DevOps Blog: http://www.sharknet.us
>>>> 
>>>> 
>>>> -- 
>>>> You received this message because you are subscribed to the Google Groups 
>>>> "Ansible Project" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>>> email to [email protected].
>>>> For more options, visit https://groups.google.com/groups/opt_out.
>>> 
>>> 
>>> 
>>> -- 
>>> Michael DeHaan <[email protected]>
>>> CTO, AnsibleWorks, Inc.
>>> http://www.ansibleworks.com/
>>> 
>> 
>> 
> 
> 


# (C) 2013, Stephan Buys, <[email protected]>
#
# This file is based on noop.py which is a part of Ansible, and is
# a derivative work as per the GPL. All original conditions apply.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

import os
import shutil
import datetime
import time
import yaml
import glob

class CallbackModule(object):

    tmpfolder = "./tmp_xunit_out/"
    filename = "xunit.xml"
    current={}
    results={}
    init = False
    start = None

    fmt = '%Y-%m-%d %H:%M:%S'

    def delta(self, delta_str):
        # Ansible reports task durations as e.g. "0:00:01.234567";
        # convert the whole-second part to an integer number of seconds
        (dt, fractions) = delta_str.split('.')
        x = time.strptime(dt, '%H:%M:%S')
        since_last = datetime.timedelta(hours=x.tm_hour, minutes=x.tm_min, seconds=x.tm_sec)
        since_last_in_seconds = since_last.days * 24 * 60 * 60 + since_last.seconds
        return since_last_in_seconds

    def writeline(self, key, value):

        # Guard against missing values and coerce numbers to strings
        if value is None:
            value = ""
        if isinstance(value, int):
            value = str(value)

        value = value.strip('\n').strip('\r')

        out = open(self.current['file'], 'ab+')
        out.write(key + ": ")
        # Values containing ':' would confuse the naive YAML parsing of
        # the .out files, so they are skipped rather than quoted for now
        if value.find(':') == -1:
            out.write(value)
        out.write('\n')
        out.close()


    def __init__(self):
        pass

    def on_any(self, *args, **kwargs):

        if len(args) == 0:
            return

        if type(args[0]) != str:
            return
        
        if len(args) == 1:
            return


        if 'name' in self.current:


            self.current['host'] = args[0]

            details = args[1]

            if details:

                if 'invocation' in details:
                    module = details['invocation']

                    self.writeline('module_name',module['module_name'])
                    self.writeline('module_args', module['module_args'])
                    self.current['details'] = details



        pass

    def runner_on_failed(self, host, res, ignore_errors=False):
        self.writeline('host', host)
        if 'delta' in res:
            self.writeline('run_time', self.delta(res['delta']))

        self.writeline('result', 'failed')
        if 'stdout' in res:
            self.writeline('stdout', '"' + res['stdout'] + '"')

        if 'stderr' in res and res['stderr'] != "":
            self.writeline('stderr', '"' + res['stderr'] + '"')

        if 'failcount' in self.results:
            self.results['failcount'] = self.results['failcount'] + 1
        else:
            self.results['failcount'] = 1



    def runner_on_changed(self, host, res):
        self.writeline('host', host)
        if 'delta' in res:
            self.writeline('run_time', self.delta(res['delta']))
        self.writeline('result', 'changed')


    def runner_on_ok(self, host, res):

        # Skip the implicit setup/fact-gathering task entirely
        if 'invocation' in res:
            if res['invocation']['module_name'] == 'setup':
                return

        if 'delta' in res:
            self.writeline('run_time', self.delta(res['delta']))

        self.writeline('host', host)
        self.writeline("result", "ok")

    def runner_on_error(self, host, msg):

        self.writeline('error_host', host)
        self.writeline('error_msg', msg)

    def runner_on_skipped(self, host, item=None):
        pass

    def runner_on_unreachable(self, host, res):
        pass

    def runner_on_no_hosts(self):
        pass

    def runner_on_async_poll(self, host, res, jid, clock):
        pass

    def runner_on_async_ok(self, host, res, jid):
        pass

    def runner_on_async_failed(self, host, res, jid):
        pass

    def playbook_on_start(self):
        pass

    def playbook_on_notify(self, host, handler):
        pass

    def playbook_on_no_hosts_matched(self):
        pass

    def playbook_on_no_hosts_remaining(self):
        pass

    def playbook_on_task_start(self, name, is_conditional):

        if 'count' in self.results:
            self.results['count'] = self.results['count'] + 1
        else:
            self.results['count'] = 1

        file = os.path.join(self.tmpfolder,str(self.results['count']) + ".out")

        self.current = {}
        self.current['name'] = name
        self.current['file'] = file

        self.writeline('name',name)
        self.writeline('run_count',str(self.results['count']))


        pass

    def playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
        pass

    def playbook_on_setup(self):
        pass

    def playbook_on_import_for_host(self, host, imported_file):
        pass

    def playbook_on_not_import_for_host(self, host, missing_file):
        pass

    def playbook_on_play_start(self, pattern):
        if not self.init:
            self.init_xunit()
            self.init = True


    def init_xunit(self):
        # Start with a clean scratch folder for the per-task .out files
        if os.path.exists(self.tmpfolder):
            shutil.rmtree(self.tmpfolder)
        os.makedirs(self.tmpfolder)

        filename = "xunit_out.xml"
        output = open(filename, 'w')
        output.write('<?xml version="1.0"?>\n')
        output.write('<testsuites xmlns="http://check.sourceforge.net/ns">\n')
        d = datetime.datetime.now()
        datetimestring = d.strftime(self.fmt)
        output.write('<datetime>' + datetimestring + '</datetime>\n')
        output.close()
        self.start = d



    def playbook_on_stats(self, stats):

        filename = "xunit_out.xml"
        output = open(filename, 'ab+')

        files = glob.glob(os.path.join(self.tmpfolder, '*.out'))
        # Emit tests in run order; the files are named "<run_count>.out"
        files.sort(key=lambda f: int(os.path.basename(f).split('.')[0]))

        testcount = 0
        testname = "ansible_testrunner"
        results = []

        output.write('  <suite>\n')
        output.write('  <title>' + testname + '</title>\n')

        for filename in files:

            testcount = testcount + 1

            stream = open(filename, 'r')
            obj = yaml.safe_load(stream)

            if 'run_time' in obj:
                runtime = obj['run_time']
            else:
                runtime = 0

            if runtime == 0:
                timestr = "0"
            else:
                timestr = str(runtime)

            errormessage=""
            message = "Not Available"

            if 'stderr' in obj:
                errormessage = obj['stderr']
            elif 'stdout' in obj:
                errormessage = obj['stdout']
            else:
                if 'module_args' in obj:
                    if obj['module_args'] is None:
                        errormessage = "Failed command but no arguments could be found"
                    else:
                        errormessage = "Failed on "  + obj['module_args']
                else:
                    errormessage = "no module args, fatal error must have occurred, check ansible syntax"



            if 'result' in obj:
                if obj['result'] == "failed":
                    result = "failure"
                    message = errormessage
                else:
                    result = "success"
                    message = "Passed"
            else:
                result = "failure"
                message = errormessage

            if message is None:
                message = "Not Available"

            results.append('    <test result="' + result + '">\n')
            results.append('      <path>.</path>\n')
            results.append('      <fn>unknown</fn>\n')
            results.append('      <id>' + obj['name'] +'</id>\n')
            results.append('      <iteration>0</iteration>\n')
            results.append('      <description>NULL</description>\n')
            results.append('      <message>'+message+'</message>\n')
            results.append('    </test>\n')

            stream.close()



        for line in results:
            output.write(line)

        output.write('  </suite>\n')
        d = datetime.datetime.now()

        since_last = d - self.start

        since_last_in_seconds = since_last.days * 24 * 60 * 60 + since_last.seconds
        if since_last_in_seconds == 0:
            durationstring = "0.0"
        else:
            durationstring = str(since_last_in_seconds)
            durationstring = durationstring + ".0"

        output.write('  <duration>'+durationstring + '</duration>\n')
        output.write('</testsuites>\n')
        output.close()
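
If you want to sanity-check the generated xunit_out.xml outside of Jenkins, a 
quick (and equally hacky) sketch is to parse it and count failures; the sample 
document below just mirrors the structure the plugin writes and is not real 
output:

```python
# Hacky sanity check for the plugin's xunit_out.xml output
import xml.etree.ElementTree as ET

# The Check report namespace written by the plugin
NS = {"check": "http://check.sourceforge.net/ns"}

def count_results(xml_text):
    """Return (total, failed) test counts from the plugin's output."""
    root = ET.fromstring(xml_text)
    tests = root.findall(".//check:suite/check:test", NS)
    failed = sum(1 for t in tests if t.get("result") == "failure")
    return len(tests), failed

# Minimal hand-written sample mirroring what the plugin emits
SAMPLE = """<?xml version="1.0"?>
<testsuites xmlns="http://check.sourceforge.net/ns">
<datetime>2013-12-11 20:00:00</datetime>
  <suite>
  <title>ansible_testrunner</title>
    <test result="success"><id>Check ok</id><message>Passed</message></test>
    <test result="failure"><id>Check bad</id><message>boom</message></test>
  </suite>
  <duration>1.0</duration>
</testsuites>"""

print(count_results(SAMPLE))  # -> (2, 1)
```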

