In a discussion with Anarcat a couple of weeks ago, we talked about how to begin getting test coverage in Aegir. There currently isn't much (if any) testing in any of the Aegir components, which can lead to regressions that are hard to identify. Adding testing to the front-end is difficult, as we'd need to simulate the back-end. IIRC, testing any back-end component appears to require tests to be in place for hosting_save, so we'll probably want to start with that.

BTW, to bootstrap this, Koumbit is offering to host a testing sprint in Chicago. Please let us know if you're interested here: http://community.aegirproject.org/node/448.

Otherwise, I propose that we use this issue to coordinate a testing strategy.

Comments

ergonlogic’s picture

Oops! That should have read 'provision-save', rather than 'hosting_save'.

anarcat’s picture

Title: Testing [meta] » unit testing [meta]
Anonymous’s picture

I have built a build test in Jenkins that:

1) Creates a new Debian Squeeze VPS at Rackspace Cloud
2) Installs Aegir on it
3) Builds a Drupal6, Drupal7 and OpenAtrium platform using Drush Make
4) Does a provision-save / provision-install / import of a site on each platform (with their respective install profiles)

It's not entirely comprehensive, but it works, notifies us in IRC on success/fail and even alerts my phone if it fails (via http://notifo.com).

http://jenkins.greenbeedigital.com.au:8080/job/aegir%20install/

I'll share the code soon. It's based on my Frigg Python script, which talks to the Libcloud API and uses Fabric to run the remote SSH commands on the new machine.
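For reference, here is a minimal sketch of that pattern (not the actual Frigg code): provision a node with Libcloud, then drive it over SSH with Fabric 1.x. The credentials, image-name filter and install command below are placeholders.

# Rough sketch only, not the actual Frigg script. Assumes apache-libcloud and
# Fabric 1.x are installed; credentials, image filter and install command are
# placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from fabric.api import env, run

# 1) Create a new Debian Squeeze VPS at Rackspace Cloud.
Rackspace = get_driver(Provider.RACKSPACE)
conn = Rackspace('my-username', 'my-api-key')            # placeholder credentials
image = next(i for i in conn.list_images() if 'Squeeze' in i.name)
size = conn.list_sizes()[0]                               # smallest flavour
node = conn.create_node(name='aegir-buildtest', image=image, size=size)

# In a real script you would poll here until the node is running and has an
# address; the attribute name also varies across old libcloud versions.
ip = node.public_ips[0]

# 2) Install Aegir on it over SSH with Fabric.
env.host_string = ip
env.user = 'root'
run('apt-get update')                                     # illustrative prep step
run('bash /root/install-aegir.sh')                        # placeholder for the real install steps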

After this I want to do an upgrade build too, to ensure our upgrade script works.

Anonymous’s picture

We now have an 'aegir upgrade' test:

http://jenkins.greenbeedigital.com.au:8080/job/aegir%20upgrade

This one does the following in this order:

1) Provisions a VPS at Rackspace Cloud
2) Installs the last release of Aegir (currently hardcoded to 1.0-rc2)
3) Upgrades it to the latest master branch from git
4) Fetches and builds drupal 6, 7 and openatrium with drush make
5) Installs a site on each platform

It exits if any of these steps fails.

This test is probably even more useful for us to run pre-release, to ensure the upgrade path works, and that verifying/installing platforms and sites still works after an upgrade.
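As a rough, hypothetical sketch of that fail-fast sequence (not the actual Jenkins job; the wrapper-script names and makefile paths below are made up), the steps map naturally onto a list of remote Fabric commands, since Fabric 1.x aborts the run as soon as any command exits non-zero:

# Hypothetical sketch of the upgrade job's fail-fast step sequence; the real
# job uses its own scripts, and these command names are placeholders.
from fabric.api import env, run

env.host_string = '203.0.113.10'   # IP of the VPS provisioned in step 1 (example)
env.user = 'root'

steps = [
    'bash /root/install-aegir.sh 1.0-rc2',    # 2) install the last release (placeholder script)
    'bash /root/upgrade-aegir.sh master',     # 3) upgrade to latest git master (placeholder script)
    'drush make drupal6.make /var/aegir/platforms/drupal6',       # 4) build the platforms
    'drush make drupal7.make /var/aegir/platforms/drupal7',
    'drush make openatrium.make /var/aegir/platforms/openatrium',
    # 5) installing a site on each platform would follow, via provision-save /
    #    provision-install against each platform (details omitted).
]

for step in steps:
    # Fabric raises SystemExit on the first command that returns non-zero,
    # which gives the "exit if any step fails" behaviour described above.
    run(step)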

ergonlogic’s picture

Just posted an issue (http://drupal.org/node/1128614) to test .deb installs too.

anarcat’s picture

Also note that the drush folks have their own unit testing that we should hook into, which is also bridged with Jenkins:

http://ci.drush.ws:8080

anarcat’s picture

I've bitten the bullet and become more familiar with PHPUnit, the unit testing library that drush chose.

Basically, the way drush does unit testing right now is through exec() calls; that is, it uses the "drush backend" functionality to fire up drush commands and check for the expected results. This is more like functional testing, because we do not test one function at a time.
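Drush's own tests are written in PHP with PHPUnit, but the exec-and-inspect pattern is easy to illustrate in a language-neutral way. A minimal Python/unittest sketch of the same idea, assuming drush is on the PATH:

# Illustrative only: drush's real test suite is PHPUnit (PHP). This just shows
# the pattern of shelling out to drush and asserting on the result.
import subprocess
import unittest


class DrushExecStyleTest(unittest.TestCase):
    """Functional-style tests that exercise whole drush commands."""

    def drush(self, *args):
        """Run drush with the given arguments and return (exit_code, stdout)."""
        proc = subprocess.run(['drush', *args], capture_output=True, text=True)
        return proc.returncode, proc.stdout

    def test_drush_runs(self):
        code, out = self.drush('--version')
        self.assertEqual(code, 0)
        self.assertTrue(out.strip(), 'expected a version string on stdout')

    def test_provision_command_is_registered(self):
        # Assumes provision is installed; drush help exits non-zero for an
        # unknown command.
        code, _ = self.drush('help', 'provision-save')
        self.assertEqual(code, 0)


if __name__ == '__main__':
    unittest.main()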

While investigating a performance issue in drush's table displays, I wrote a test case that targets a specific function instead of testing a whole commandfile. See #1132902: performance issues with huge tables. This should probably be split off into a separate issue so that we clarify Drush 5's API for that kind of thing.

anarcat’s picture

Another thing we can do here is #1194676: factor out current jenkins tests into a provision-test command, which is fairly low-hanging fruit...

Steven Jones’s picture

So I've gone and popped mig5's Jenkins python scripts into a Drupal.org git sandbox:
http://drupal.org/sandbox/darthsteven/1286014
and I've also enhanced the 'stable' release script to accept command-line parameters, and then added a wrapper script that Jenkins can call. This means the stable-release checking script that we use to test the release process no longer requires copying the script onto the server; you can just change parameters in the Jenkins UI. See an implementation of this here:
http://ci.aegirproject.org/view/Stable%20builds/job/aegir%206.x-1.4%20in...
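The real scripts live in the sandbox linked above; purely as a sketch of the approach, a parameterised entry point might look like the following (the option names are made up, not the sandbox's), with the Jenkins UI supplying the values as build parameters:

# Hypothetical parameterised wrapper; the real scripts in the sandbox may use
# different option names and defaults.
import argparse
import sys


def parse_args(argv):
    parser = argparse.ArgumentParser(
        description='Test installing a stable Aegir release on a target host.')
    parser.add_argument('--aegir-version', default='6.x-1.4',
                        help='Aegir release tag to install')
    parser.add_argument('--target-host', required=True,
                        help='hostname or IP of the VPS to install onto')
    return parser.parse_args(argv)


if __name__ == '__main__':
    args = parse_args(sys.argv[1:])
    # The Jenkins job invokes this with its build parameters, e.g.:
    #   python stable_release_test.py --aegir-version "$AEGIR_VERSION" \
    #       --target-host "$TARGET_HOST"
    print('Testing Aegir %s on %s' % (args.aegir_version, args.target_host))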

Also, I've made that job use a git checkout of the scripts in the job itself, rather than a global scripts directory. This basically means it should be much easier to make changes to the testing scripts: you just push the change and rebuild the Jenkins job, and it pulls in the changes to the build scripts.

Steven Jones’s picture

And I'm now doing the same for the dev testing scripts. There's a lot of code that can be shared here, but we can refactor that later.

ergonlogic’s picture

Title: unit testing [meta] » Testing [meta]
Version: 6.x-0.4-alpha3 » 7.x-2.x-dev

Over in #1261030: [meta] Roadmap: Aegir 3.x (D7 port) we've been discussing some options to test the full Aegir stack.

While I've been working on some Selenium IDE/RC testing, I actually think Behat is looking like a better solution. Check out the beginnings of a test suite from @joestewart: http://drupal.org/sandbox/joestewart/1955274. According to the Behat docs, it already uses Selenium for some front-end testing and should allow triggering WebDriver for Selenium IDE-based tests too. Apparently, it also integrates well with PHPUnit which, IIRC, is the basis for Drush's tests. So we might be able to use Behat (and Mink) for all our testing needs.

That said, I'm certainly not against Simpletest, especially if we can figure out how to test Aegir's full functionality with it, which I've always been under the impression it couldn't do.

Steven Jones’s picture

I think that we could write some PHPUnit tests that test provision in isolation.
Then we can write some Simpletests to test the Hostmaster frontend in isolation.
And then we can use something else (behat + mink) to write 'full stack' tests too.

helmo’s picture

Version: 7.x-2.x-dev » 7.x-3.x-dev

Moving to the 7.x-3.x version tag (7.x-2.x never existed; we went for 6.x-2.x).

helmo’s picture

Issue summary: View changes
Status: Active » Fixed

Jenkins has seen many improvements over the years ...

However, recently most effort has gone into ... #2778447: [META] Travis CI Integration for automated testing

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.