
Test-Driven Workflow in Two Easy Steps

Published Aug 29, 2017 · Last updated Sep 01, 2017

Introduction

During my career I have introduced multiple teams to Test-Driven Development (TDD) workflows. I've found that in most cases, teams initially struggle to understand the overall TDD workflow, and particularly how to get started. I wrote this article in hopes of helping other engineers get a clearer picture of the test-driven workflow and easing some of the pain of adopting TDD.

The Rules of TDD

On its surface the TDD workflow is deceptively simple, consisting of three basic rules:

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

Despite their simplicity, the rules imply some subtle points as well:

  • Rule 1 implies that tests are written first, sometimes before the production code even exists. Because the tests are the initial focus, they are said to 'drive' the implementation.

  • Rule 2 maintains the emphasis on tests, but introduces the minimalist aspect of TDD: No matter which stage you're at, write the minimum required code to get to the next step. Because compilation failures count as failing tests, unit tests often begin life as non-compiling code that references a nonexistent class or member. This minimalist approach implies that TDD is focused on making small, incremental changes that add up to a final implementation.

  • Rule 3 extends the TDD minimalism concept to our production code, where we write the minimum required code to make the failing test pass (or code compile). For example, initial implementations will often return hardcoded results to satisfy the test, then evolve with the test coverage to provide a final implementation.

Two Steps to TDD

Although there are three rules, in practice TDD generally consists of two steps repeated over and over:

Step 1: Introduce a Test Failure

TDD always starts with a failing test of some kind. The nature of the failure will be dependent on what you are testing and where you are in the TDD process, but it's all about the failure at this point. Common test failure points are non-compiling code (perhaps because the class or method being tested doesn't exist yet), failing assertions, or incomplete implementations.

Rules 1 and 2 guide us here, saying that we have to make the smallest possible change to our test that will introduce a failure. In other words, don't write a complex test up front; let your tests and production code evolve in step with one another.

Step 2: Write Minimal Code Required to Pass the Test

Once we have our failing test, we need to get back into the "Everything Passes" state by updating our production code. In the same spirit as rule 2, rule 3 guides us to write as little code as possible to make the failing test pass. During initial stages, this may consist of such simple things as empty method bodies or returning hardcoded values. In later stages (e.g. when using mocks to test behavior), this leads us to our actual production implementation. Once the test passes, we return to step 1 and the process continues.

To help illustrate this workflow, let's implement a super-basic calculator class that knows how to correctly add two numbers together. We'll start from ground zero, with no test or production code in place, then repeat the steps until we have a correctly functioning implementation.

Side Note: If Wishes Were Horses

I often think of the TDD workflow more in the sense of "Wish-Driven Development" - I write a test explaining what I wish the software did, and then write production code to satisfy that wish.

Walkthrough: TDD Workflow From Scratch

Since the focus in TDD is on the tests, we start by creating a completely empty test class. I usually name it after the class being tested, so we'll call it CalculatorTests.

[TestFixture]
public class CalculatorTests{
    // Tests will go here.
    // At this point there are no tests written (and no Calculator class
    // to test yet), so running this empty fixture would pass.
}

Technically, since there are no tests in this class (indeed there's no implementation at all), if we attempted to run this test suite, it would pass - because there's nothing there to fail. If we want to proceed to Step 2, we need to introduce some kind of failure. At this point, we know that we want a Calculator to run tests against, so we update the test class like so:

[TestFixture]
public class CalculatorTests{
    // At this point, we do not have a Calculator class to test,
    // so this line will not compile.
    // 
    // This fulfills the "Compilation failures are equivalent to test failures" portion
    // of TDD Rule #2.
    //
    // At this point, "I wish my code would compile..."
    Calculator SystemUnderTest;
}

Since the Calculator class doesn't exist yet, our code does not compile and we can move on to Step 2, implementing as little production code as possible to return to a compilable state:

// Note: No changes to CalculatorTests in this step - Step 2 is all about the production code.

public class Calculator{
    // No functionality here yet - in TDD, we can't write that until we have a 
    // test for it.
    //
    // "Wish granted.  Your code compiles."
}

Now that we're back in a compilable state, we return to Step 1, introducing a test failure. At this point, we know that we want a method that can add two numbers and return a result, so let's stub out a test method:

[TestFixture]
public class CalculatorTests{
    Calculator SystemUnderTest;

    [Test]
    public void Add_Always_ReturnsExpectedResult(){
        // Calculator does not have an Add method at this point, so this will not compile.
        // Therefore we can move on and write the minimum production code required
        // to make it compile.
        //
        // "I wish Calculator had an Add method that takes two integers and returns a value
        // so my code would compile."
        var result = SystemUnderTest.Add(1,1);
    }
}

As before, we're wishing for something that doesn't exist yet, so we have a compilation failure and we can head back to Step 2.

public class Calculator{
    // Stub out an Add method with the minimum required functionality
    // so the code will compile again (and fulfill our wish)
    public int Add(int lhs, int rhs){
        // Yes, we are returning a hardcoded result now.  At this point, we write the _minimum_ code
        // required to allow compilation.
        //
        // "Wish granted.  Calculator has an Add method that takes two integer inputs 
        // and returns an integer output."
        return 0;
    }
}

It's important to notice here that we've again written the bare minimum code required to return to a compilable state. At this point we don't care that Add() doesn't do any work or that the only time its return value would be correct is for an input of (0,0). All we care about right now is that the code compiles so we can go back to Step 1 and break stuff again.

Now we're one step closer, but if we run the test, it will fail with a NullReferenceException: although we declared a SystemUnderTest field, we never actually instantiated one. This is an instance of a failure caused on the test side instead of the production side, but the minimalist spirit of Rule 3 still applies - we do the minimum possible to return the code to a state where all tests pass:

[TestFixture]
public class CalculatorTests{
    Calculator SystemUnderTest;

    // In NUnit, the SetUp method runs before each test, so we'll have a fresh
    // instance of Calculator for each test case (including parameterized tests.)
    [SetUp]
    public void SetUp(){
        // "Wish granted.  SystemUnderTest is a fresh Calculator instance."
        SystemUnderTest = new Calculator();
    }

    [Test]
    public void Add_Always_ReturnsExpectedResult(){
        // Without the SetUp method above, this code compiles but fails with a
        // NullReferenceException, because SystemUnderTest is never instantiated.
        //
        // "I wish I had an instance of Calculator to test."
        var result = SystemUnderTest.Add(1,1);
    }
}

At this point, the basic test infrastructure (i.e. the TestFixture, the SystemUnderTest, and a test for the Add method) is in place, so we can shift our focus to the actual implementation details. Let's add an assertion to our test to verify that Add returns the correct value for an input of (1,1):

[TestFixture]
public class CalculatorTests{
    Calculator SystemUnderTest;

    [SetUp]
    public void SetUp(){
        SystemUnderTest = new Calculator();
    }

    [Test]
    public void Add_Always_ReturnsExpectedResult(){
        var result = SystemUnderTest.Add(1,1);
        // Calculator currently returns 0 no matter what arguments are
        // passed to it.  As a result, this assertion will fail.
        //
        // Now that we have a failing test, Rule 1 lets us move back to
        // writing production code...
        //
        // "I wish that the Add method returned 2 instead of 0."
        Assert.That(result, Is.EqualTo(2));
    }
}

Our test now fails because Calculator.Add() returns 0 for all inputs, so on to Step 2 again!

Continuing in the spirit of minimal implementation, we will update our production code to simply return the expected value of 2 instead of 0:

public class Calculator{
    public int Add(int lhs, int rhs){
        // Yes, all we did was update the return value.  This is the _minimum_ code
        // required to allow the test to pass and fulfill our wish.
        // "Wish granted.  Here's a 2."
        return 2;
    }
}

This will fulfill the one specific test case we currently have defined, so we can proceed back to breaking things in Step 1.

Looking at the test above, it might seem like we're just playing a game of cat and mouse between the expected return value and the actual return value. Theoretically we could write another separate test method to test another case (perhaps 4 + 4 = 8), but that rapidly leads to large test classes with lots of duplicate code. A better approach is to use parameterized tests to execute a given test multiple times with varying inputs and outputs, as we do below.

OK, so our test passes for one very specific test case, but it won't work for any inputs other than (1,1), so it's not a very effective calculator (or a very robust test suite) just yet. Let's refactor our test a bit to introduce some parameters, which will allow us to test multiple inputs and verify that the results are correct in each scenario:

[TestFixture]
public class CalculatorTests{
    Calculator SystemUnderTest;

    [SetUp]
    public void SetUp(){
        SystemUnderTest = new Calculator();
    }

    // NUnit's TestCaseAttribute allows us to run a test multiple times with different inputs
    // and expected outcomes.
    //
    // The parameters given to the TestCaseAttribute correspond to the parameters on the
    // TestMethod.
    [TestCase(1,1,2)] // This test case will already pass because of the hardcoded return value
    [TestCase(2,2,4)] // This test case will currently fail because Add(2,2) does not return 4.
    public void Add_Always_ReturnsExpectedResult(int lhs, int rhs, int expectedResult){
        var result = SystemUnderTest.Add(lhs,rhs);
        // Calculator currently returns 2 no matter what arguments are
        // passed to it.  As a result, this assertion will fail for the second TestCase.
        //
        // "I wish that Add would return the correct result no matter what 
        // parameters are passed to it."
        Assert.That(result, Is.EqualTo(expectedResult));
    }
}

Now when we run our test suite, we have one passing scenario and one failing, which guides us back to the production class to update the implementation so the test will pass:

public class Calculator{
    public int Add(int lhs, int rhs){
        // Now we finally do a working implementation, because we have test cases that force us
        // to do so in order to fulfill all our wishes.
        //
        // "Wish granted, I will perform an addition operation on the 
        // specified arguments and return the result."
        return lhs + rhs;
    }
}

And now we've finally got a functional adding calculator, implemented via a TDD process so that it has comprehensive test coverage. We can easily extend the test coverage to verify other scenarios, for example adding two negative numbers, or adding a negative number to a positive one. (Admittedly these are trivial examples - the point is that once the infrastructure is in place, coverage can often be expanded with minimal work.)
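A couple of additional TestCase attributes on the existing test method would be enough to cover those scenarios. This is just a sketch - the specific values below are illustrative:

[TestCase(1,1,2)]
[TestCase(2,2,4)]
[TestCase(-1,-1,-2)] // Two negative numbers
[TestCase(-3,5,2)]   // A negative number added to a positive one
public void Add_Always_ReturnsExpectedResult(int lhs, int rhs, int expectedResult){
    var result = SystemUnderTest.Add(lhs,rhs);
    Assert.That(result, Is.EqualTo(expectedResult));
}

Because Add already returns lhs + rhs, both new cases pass immediately - no further production changes are needed.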

It's true that at this level of detail, TDD looks like an awful lot of time-consuming work. Honestly though, once you're comfortable with the general flow, it goes very quickly. For example, the whole process outlined above - from no code whatsoever to a simple but functioning method with corresponding test coverage - took me about 10 minutes, most of which was spent on the explanatory comments.

Rinse and Repeat

So now that we've seen the two-step flow of Test-Driven Development, it should be clear how the tests "drive" the implementation: the whole flow starts with a minimally broken test, followed by a minimal implementation that returns the test to a passing state, then goes back to breaking tests again. For example, imagine that we want to extend the calculator to support subtraction too. Following the same pattern as before, it would go like this (a code sketch of the finished result follows the list):

  • Create a test for the Subtract method
    • This would also live in CalculatorTests, maybe called Subtract_Always_ReturnsExpectedResult...
    • The test should not have any assertions yet, it just attempts to invoke Subtract()
    • The test should not be parameterized yet - that comes later.
    • Since Subtract() does not exist, compilation fails, so we can now move to step 2
  • Stub out a minimal Subtract method on Calculator
    • The method should return a hardcoded value regardless of input; I recommend 0 as a placeholder.
    • Since the code compiles now, the test will pass and we can move back to step 1
  • Add a failing assertion to the Subtract test
    • Add an assertion to verify the result of the calculation. The assertion should fail because Subtract() does not return the correct value
    • Since we're back in a failing state, move on to step 2
  • Update the Subtract method to return the correct expected answer
    • We're still returning a hardcoded value; it just happens to be exactly what the test expects.
    • The test passes, return to step 1
  • Parameterize the Subtract test to provide an additional failing test case
    • Take your existing test case and use those inputs and expected output as one test case
    • Add another test case that fails
    • We're back in a failing state, so move to step 2
  • Update the Subtract method to correctly calculate value based on inputs
    • This is where your concrete implementation begins (and in simple cases like this one, ends too...)
    • At this point, no matter what test cases we add to the Subtract test method, the test will continue to pass - you can prove it by adding new test cases as needed.
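If you work through those steps, the finished result would look roughly like the sketch below. The test case values and comment wording are illustrative; the names simply follow the conventions used earlier in this article.

// In CalculatorTests:
[TestCase(5,3,2)]
[TestCase(2,7,-5)]
public void Subtract_Always_ReturnsExpectedResult(int lhs, int rhs, int expectedResult){
    var result = SystemUnderTest.Subtract(lhs,rhs);
    Assert.That(result, Is.EqualTo(expectedResult));
}

// In Calculator:
public int Subtract(int lhs, int rhs){
    // "Wish granted." The concrete implementation, written only after the
    // hardcoded placeholder stopped satisfying the test cases.
    return lhs - rhs;
}

Note that the same SetUp method gives us a fresh Calculator instance for these tests as well, so no extra test infrastructure is needed.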

Conclusion

As you can see, the TDD process itself is relatively simple, consisting of two repeated steps: Introduce a test failure, then write only enough code to make the test pass. More subtle is TDD's focus on making minimal changes to test and production classes, ensuring that each change is tiny and atomic, but constantly progressing towards the full implementation.

When starting out, it may seem like overkill to work in such tiny, granular steps. In practice, however, I have found that the back-and-forth flow between test and production code helps me focus on small aspects of my implementation and quickly implement new features.

Next Steps: Thinking About "Code Flows"...

In more complex scenarios, it also becomes important to think about different "flows" through the code. Often these flows are characterized as "Happy" (everything went as expected) vs. "Sad" (something went wrong, but in a way we expected).

For example, imagine that we also wanted to implement division on our calculator. We would have a test very similar to the ones for the Add and Subtract methods, parameterized to provide multiple coverage paths, all that good stuff - this would cover our "Happy" flow. Division, however, doesn't work when we try to divide by 0, so this is a possible error flow (for integers, .NET throws a DivideByZeroException). To cover this type of scenario, we would normally write a separate test to verify that the expected exception is thrown.
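In NUnit, this kind of "Sad" flow check is typically written with Assert.Throws. The sketch below assumes a Divide method on Calculator - we never actually implemented one in this article, so treat the names as hypothetical:

[Test]
public void Divide_ByZero_ThrowsExpectedException(){
    // Divide is assumed here for illustration only; stubbing it out would
    // follow the same two steps used for Add and Subtract.
    //
    // "I wish that dividing by zero threw a DivideByZeroException."
    Assert.Throws<DivideByZeroException>(() => SystemUnderTest.Divide(1, 0));
}

From there, the TDD loop is exactly the same as before: the test fails (or fails to compile), and we write just enough of Divide to make it pass.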

Want to Get Hands-On?

Check out these other CodeMentor articles I've written about Test-Driven Development!

Prefer a 1:1 Approach?

Contact me on CodeMentor to schedule a 1:1 Screensharing session to get you up to speed quickly!
