Testing these days is thankfully more commonplace, but there is still some confusion in terminology, so I'll start by clarifying what I mean by the terms I use later.
Unit Test - A unit test can be run on any computer that can compile your code; it does not need a relational database or other infrastructure such as the file system to run or pass.
Integration Test - An integration test does rely on some infrastructure being present. It tests that one piece of technology works with another, for example that a repository can persist data to a database.
Acceptance Test - A test that requires the entire application to be deployed and tests journeys designed to simulate user activity in the system.
I'm not going to talk about unit tests; as far as I'm concerned they are a known quantity with NServiceBus. Just select your mock object library of choice and start writing your tests. That said, the amount of repetitive code you have to write as you mock out handler after handler does seem tiresome.
Our team is trying to automate as much of the testing as is reasonably possible. We want developers to be able to write tests that they can run locally and that will run against our WIP, UAT & RC environments to indicate the state of those environments. Our application is divided up into SOA services, so when a developer is working on a feature ticket they will likely only be writing code for a single service.
We started out with the idea that unit tests and acceptance tests would give us enough coverage, and to be honest that is still true to this day. What has changed is that our codebase has grown, and the build / deploy / test times have increased with it. So what could we do about it? The first thing we did was start rewriting our build scripts to perform tasks in parallel. This process is trickier than we first imagined, as dependencies complicate matters, and if not done properly it could destabilize our development environment, so while this will help in the long term it will not solve our problems today.

Another thing that happened as our application grew: the amount of resources it took to deploy the entire system onto a single machine also grew. We started having to upgrade developer workstations to 16 GB of RAM just to be able to run the entire application locally. After a while it became clear that deploying the entire application locally was not going to work forever, but after some thought it seemed that it did not have to. We had already split our app into SOA services, so why not just build, deploy, and test only the service you are creating a feature for? It turns out that this is not only straightforward but has other benefits when it comes to testing.
So now we only have the service we are working on to worry about. Instead of taking 15 minutes to build the entire codebase, we spend 90 seconds building one service. When it comes time to deploy the changes we only need to update the endpoints we have modified; locally we deploy a SOA service to just one endpoint, which takes only a minute. Then we write "integration tests" that simply inject commands and events into our SOA service's input queue to simulate user input and events published by the other services. The benefit of this approach is that simple tests take under 2 seconds to complete and complicated ones take less than 10 seconds. These test times might seem a little longer than you would expect; the next section explains why.
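To make the shape of these tests concrete, here is a minimal sketch of the inject-then-poll pattern. Our real tests are .NET against an NServiceBus endpoint and a real database; this Python version uses an in-memory queue and dict as stand-ins, and all names (`endpoint_worker`, `poll_until`, the `order_id` command) are hypothetical.

```python
import queue
import threading
import time

# Stand-ins for the service's input queue and its database.
input_queue = queue.Queue()
database = {}

def endpoint_worker():
    """Simulates the SOA endpoint: pops commands off the input queue
    and persists a result, the way a message handler would."""
    while True:
        command = input_queue.get()
        if command is None:
            break  # shutdown signal
        time.sleep(0.05)  # simulate asynchronous handling latency
        database[command["order_id"]] = "Placed"

def poll_until(predicate, timeout=5.0, interval=0.1):
    """Polls until predicate() is truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

worker = threading.Thread(target=endpoint_worker, daemon=True)
worker.start()

# The test: inject a command that simulates user input...
input_queue.put({"order_id": 42})

# ...then assert by polling, because the handler runs asynchronously
# and there is no synchronous return value to check.
assert poll_until(lambda: database.get(42) == "Placed")

input_queue.put(None)  # stop the worker
worker.join()
```

The key point is that the assertion has to tolerate the asynchrony: the test never asks "did the handler run?", only "did the expected state appear within the timeout?".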
Testing Asynchronous Messaging Applications
Martin Fowler talks about the highs and lows of testing asynchronous applications, and I can confirm from first-hand experience that he is correct. If you inject a command into your input queue and then wish to make an assertion against a query that expects that data to be written to a database, you have to poll that database for a reasonable amount of time to check the data. Your test then either passes because the data is returned as expected, or the timeout period is reached. This sounds simple, and it is, but only once you have figured out exactly what you need to poll for. If your system requires that messages be processed in a particular order, then your tests must poll to ensure that the preceding message has been processed before you inject a subsequent one. In an ideal world you'll have everything set up so that your system can handle messages in any order, but in the real world you may not.

What gets even more interesting is when you need to make data modifications to simulate time passing. As soon as your tests start modifying data they are likely to encounter deadlocks, so if you need to update data like this you must include retry logic inside your tests. We have created a framework that makes developers a lot more productive: it can poll at a fixed interval, say every second, or it can use a dynamically increasing interval that checks after 250 ms, 500 ms, 1 sec, and then every 5 secs, which helps reduce test times and shorten feedback. We are also using the MbUnit framework, which has some nice features for running tests in parallel that can dramatically reduce test run times.
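The two framework behaviours described above, increasing-interval polling and retrying data modifications that hit deadlocks, can be sketched like this. Again this is an illustrative Python translation of a .NET framework; `DeadlockError` and every function name here are hypothetical stand-ins (in the real thing the deadlock would surface as a driver-specific exception such as a SQL Server deadlock error).

```python
import time

class DeadlockError(Exception):
    """Hypothetical stand-in for a database deadlock exception."""

def backoff_intervals():
    """Yields the schedule from the post: 250 ms, 500 ms, 1 s,
    then every 5 s thereafter."""
    for delay in (0.25, 0.5, 1.0):
        yield delay
    while True:
        yield 5.0

def poll_with_backoff(predicate, timeout=30.0):
    """Polls with increasing intervals: fast assertions pass almost
    immediately, while slow ones stop hammering the database."""
    deadline = time.monotonic() + timeout
    for delay in backoff_intervals():
        if predicate():
            return True
        if time.monotonic() + delay > deadline:
            return False
        time.sleep(delay)

def retry_on_deadlock(action, attempts=3):
    """Retries a test's data modification when it deadlocks with the
    system under test, backing off a little between attempts."""
    for attempt in range(attempts):
        try:
            return action()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # out of retries; let the test fail loudly
            time.sleep(0.1 * (attempt + 1))
```

The backoff schedule is the interesting design choice: most assertions succeed within the first second, so the early checks are cheap and frequent, and only genuinely slow scenarios fall back to the 5-second interval.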
This post is a work in progress, but I wanted to put it out there to see what others are doing, so if you have anything you'd like to share, or disagree with the above, let me know via twitter.