
Testing isn't everything, but it's important

Published Feb 12, 2019

A talk that I've been thinking about for the last little while is one by Gary Bernhardt called Ideology. I highly recommend that you go watch it now if you like; I'll wait here. Otherwise, here's the tl;dr: dynamic typing enthusiasts like to say that they don't need a compiler since they have tests; static typing enthusiasts like to say they don't need tests since they have a compiler.

In my experience the former statement is much more common than the latter; it's rare to hear people who like static types say that they don't need unit tests. Maybe in the past things were different: at one point Google engineers claimed that unit tests were only necessary for bad coders. They've since been proven wrong, and now Google heavily enforces testing for all of its code.
What I do hear on occasion, though, from enthusiasts for languages like Node.js, Python, or Ruby is that they don't really need the safety and structure provided by a compiler, since they have unit test suites that catch the bugs.

That's a harmful philosophy. It's not terrible in some cases when bugs don't really matter: in early-stage startups bugs have a limited impact because you either don't have customers at all or the customers are tolerant of issues. You can afford to have bugs slip through since the benefit of being able to move quickly and change things outweighs the cost of shipping bugs.
Unfortunately this changes as the startup grows, and buggy code and regression bugs lead directly to customer churn. It's compounded because as software grows in complexity and teams grow in size, the probability of bugs being created increases very quickly. Unless you do something about it, teams end up getting into a position where they are just fighting fires instead of building product.

"But if you had a good test suite, this wouldn't happen!" claim many developers. Isn't testing good enough and will catch my bugs if they are there? There's many ways that bugs can slip past tests, here's a few examples. Every single one of these I've seen in a professional setting, including at Google which has very rigourous standards for coding:

  • Tests that don't test anything. A test can execute the code and get 100% code coverage, but if it doesn't check for the right conditions then it's not a good test (see the sketch after this list).
  • Nobody wrote them in the first place. Sometimes when the pressure to fix something is on, people skip over writing the tests. Many companies have a culture that focuses on shipping as quickly as possible, which is not conducive to writing reliable software.
  • Not checking important cases. I've seen tests that just check the happy path, without checking any of the error scenarios - what if you can't connect to the database, or the connection drops? What if the file you're trying to read isn't there?
  • Integration/functional tests. By their nature integration tests are harder to write and to maintain, and the combination of inputs explodes exponentially so it's very rare to have a solid integration test suite outside of absolutely critical projects.
  • Nobody ran the tests or looked at the results before pushing. This happens in settings (I'm looking at you, startup hackers) that don't have any process in place where failing tests block commits, merges, or pushes.
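
To make the first and third failure modes concrete, here's a minimal sketch in Python (the function and test names are made up for the example): both tests execute parse_port and earn full line coverage, but only the second would catch a regression, because it asserts on the result and on an error case rather than merely running the code.

    import pytest


    def parse_port(value: str) -> int:
        """Parse a port number from a string, rejecting out-of-range values."""
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port


    def test_parse_port_weak() -> None:
        # Runs the code and gets full coverage, but asserts nothing,
        # so it still passes if parse_port returns garbage.
        parse_port("8080")


    def test_parse_port_meaningful() -> None:
        # Checks the actual result...
        assert parse_port("8080") == 8080
        # ...and an error scenario, not just the happy path.
        with pytest.raises(ValueError):
            parse_port("99999")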

There are a few solutions to these. Some are pretty easy and widespread: code review, continuous integration, precommit hooks, and strict permissions on master branches are just a few.

A deeper solution that removes a lot of headaches is to use some sort of type hinting. The reason I recommend this is that if you're using it consistently, it removes a lot of the human error from the equation: the compiler will always remember to check everything, whereas even the best humans will occasionally slip up and forget something. The compiler is also much faster at finding bugs than an integration test suite, which can take a very long time to run - as a rule of thumb I don't block commits on integration tests, otherwise it can take hours to make changes. It's also automatic: you don't have to write the tests. Less work is a plus in my books.

But do I have to use one of those annoying languages like C++ or Java to do this, where I spend all my time trying to satisfy a compiler rather than doing real work? Nope! In Node.js you can use TypeScript or Google's Closure Compiler, in Python you can use Python 3's type annotations, and in Ruby you can use something like RDL (disclaimer: I've never used it). There are also modern statically-typed languages like Go, Rust, Swift, and Kotlin that are much more pleasant to use than previous generations.
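
As a rough illustration in Python (a minimal sketch; the functions are invented for the example), the annotations below are enough for a checker like mypy to flag the bad call without a single test being written or run:

    from typing import Optional


    def find_user_id(username: str) -> Optional[int]:
        """Look up a user id; returns None when the user doesn't exist."""
        users = {"alice": 1, "bob": 2}
        return users.get(username)


    def send_welcome_email(user_id: int) -> None:
        print(f"sending welcome email to user {user_id}")


    # mypy flags this line: find_user_id can return None, but
    # send_welcome_email requires an int. A test only catches this
    # if someone remembered to cover the "missing user" case.
    send_welcome_email(find_user_id("charlie"))

Running mypy over the file reports the incompatible argument immediately, which is exactly the "never forgets to check" behaviour described above.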

For communicating between different systems you can use JSON with JSON Schema or Protocol Buffers (which can be serialized as JSON if you need to). A horror story from the past: my team was building a distributed system with Node.js and Python, and the different nodes in the system would pass messages via JSON. Someone made a change to one system that changed the type of a field in one of the messages, but it was several hops before something actually tried to access that field. The system that tried to access it obviously crashed, and tracking down where that field had been modified took a fair bit of detective work (in addition to causing a production outage).
Fortunately for us the crash happened in a Python service, so we at least had a KeyError and a stack trace to give us an idea of what was going on; if it had happened in one of the Node.js services it would have just produced undefined and continued on its way.
If we had used some sort of type checks on messages, this debugging exercise would have been much simpler.
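
As a sketch of what that could look like (assuming the jsonschema package; the message shape here is invented for the example), each service validates incoming messages against a shared schema at the boundary, so a type change like the one above fails loudly at the first hop instead of several hops later:

    from jsonschema import ValidationError, validate  # pip install jsonschema

    # Shared schema for the (hypothetical) message format.
    MESSAGE_SCHEMA = {
        "type": "object",
        "properties": {
            "job_id": {"type": "string"},
            "retry_count": {"type": "integer"},
        },
        "required": ["job_id", "retry_count"],
    }


    def handle_message(message: dict) -> None:
        try:
            validate(instance=message, schema=MESSAGE_SCHEMA)
        except ValidationError as err:
            # Reject the message at the boundary with a clear error,
            # instead of letting the bad field propagate downstream.
            raise ValueError(f"malformed message: {err.message}") from err
        # ...safe to treat message["retry_count"] as an integer from here on...


    # Upstream, someone changed retry_count from an int to a string:
    handle_message({"job_id": "abc123", "retry_count": "3"})  # fails immediately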

I highly recommend that if you're doing any sort of back-end or infrastructure development and you're not already using some of these techniques, you get started. It'll save you a fair bit of headache later on.

Comments

Terry Reedy
5 years ago

I only comment on Python. In 20+ years, I have never seen anyone write that a Python compiler is not needed. It would be absurd. I also cannot remember anyone objecting to optional automated static analysis by linters.

There has been disagreement over the need for automated static type checking (a form of testing) of expressions and function calls in addition to runtime checks. Strictly speaking, with enough runtime checks and separate tests, static checking is not needed. But it is also easier to define type limitations with annotations than with argument type checking code, so for some projects, static checking is pragmatically needed. So optional annotations and automatic checking of types, by a program other than the compiler, have been added.

Rob Britton
5 years ago

Yep that’s true about Python people, when I think about it they’re generally receptive to tools that make their lives easier. Most of the arguments I’ve had of this flavour have been with JS or Ruby fans rather than Python ones.

To be fair, a lot of the magic that happens in Ruby libraries would be very difficult with static types, so that’s probably why they oppose it. JS though, I have no idea.
