Why TDD is severely overhyped (and why you should still try it)

Test-driven development (TDD) is all the hype these days, especially if you frequent LinkedIn and various software development community forums. It has almost become a religion. Many TDD evangelists keep saying that it’s the best thing since sliced bread and try to convince everyone to start using it. Some even go as far as telling you that you aren’t a “good”, or even a “real”, developer if you don’t use it.

But is TDD as good as some would have you believe? My answer is no, and I will shortly explain why. Before I do, though, let’s remind ourselves what TDD actually is. After all, not everyone who uses the acronym actually understands what it means, and we can’t discuss whether TDD is useful until we at least understand what it is.

What is TDD

So, let’s first get two somewhat popular misconceptions about TDD out of the way:

  • TDD is not the same as having automated tests. You can have automated tests and even have a high test coverage without doing TDD.
  • TDD is not just about writing tests before writing code. Well, this is an essential component of TDD, but it’s not the only component.

The TDD process goes through three distinct stages, known as red, green, and refactor. Let’s go through a summary of each of them.

Red stage: all tests fail

Some TDD puritans will object to seeing “tests” as a plural in the title. But I’ll get to that in a minute.

Imagine that we have gathered the requirements and we now fully understand what the software we are writing must do. But before starting to write the software, we write the tests that validate the requirements.

The red stage of TDD is completed when we have written automated tests that provide sufficient coverage of a specific piece of functionality. While writing these tests, we must include every scenario we can think of, including edge cases and boundary conditions.

Although the tests are complete, we still don’t have the implementation. It is acceptable to have some stub methods that return a default value or throw an exception when invoked. Some TDD puritans would say that even the stub methods should not exist at this stage, but if you don’t write them, you will just make life difficult for yourself: you will be fighting your code editor, which won’t be able to autocomplete anything and will keep highlighting compilation errors.

One important point of this stage is that your tests must fail. If a test passes while no implementation exists, that may be a sign that it isn’t reliable and isn’t safe from false positives.
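To make this more concrete, here is a minimal sketch of what the red stage could look like in Python with pytest-style tests. The module, the calculate_discount function, and the discount rules are all hypothetical, invented purely for illustration:

    # discounts.py -- only a stub exists, so the editor can autocomplete and compile,
    # but every test below must fail at this stage.
    def calculate_discount(order_total: float, is_loyal_customer: bool) -> float:
        raise NotImplementedError

    # test_discounts.py -- written first, covering the scenarios we could think of.
    import pytest

    from discounts import calculate_discount

    def test_no_discount_below_threshold():
        assert calculate_discount(order_total=50.0, is_loyal_customer=False) == 0.0

    def test_ten_percent_discount_above_threshold():
        assert calculate_discount(order_total=200.0, is_loyal_customer=False) == 20.0

    def test_loyal_customers_get_an_extra_five_percent():
        assert calculate_discount(order_total=200.0, is_loyal_customer=True) == 30.0

    def test_negative_totals_are_rejected():
        with pytest.raises(ValueError):
            calculate_discount(order_total=-1.0, is_loyal_customer=False)

Running the suite at this point produces nothing but failures, which is exactly what the red stage asks for.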

Green stage: all tests pass

Once we have written the tests and validated that they fail when no implementation is present, it’s time to write the implementation. At this stage, however, we shouldn’t worry about whether our implementation is clean. We just need to arrive quickly at any solution that makes the tests pass.

The main purpose of this stage is to implement the required behavior and use the tests to validate that the behavior is exactly what we expect. But we aren’t done yet. The next stage is also very important.
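Continuing the hypothetical calculate_discount sketch from the red stage, a green-stage implementation only has to make those tests pass; it does not have to be elegant:

    # discounts.py -- a quick-and-dirty green-stage version: correct, but not pretty.
    def calculate_discount(order_total: float, is_loyal_customer: bool) -> float:
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        if order_total >= 100.0:
            if is_loyal_customer:
                return order_total * 0.15  # 10% base discount plus a 5% loyalty bonus
            return order_total * 0.10
        return 0.0

All four tests from the red stage now pass, so the required behavior is in place, magic numbers and nested ifs notwithstanding.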

Refactor stage: code looks good

Now we know that we have the behavior that we want since all of our tests are passing. It’s time to refactor our code to make it more readable and maintainable.

Perhaps we will find that some bits of the functionality can be shared by other parts of our app, so we can move them to a shared class or a library. The tests that are already covering the other parts of the app will validate that we haven’t broken those by moving the functionality around.

Other parts of the logic may need to be decoupled into separate functions or methods. They may be moved to private methods of the class under test, or they may even be moved to separate classes.

The purpose of this step is three-fold:

  • Firstly, we are ensuring that our code adheres to the best practices and is readable and maintainable.
  • Secondly, we are decoupling our code and outlining clear boundaries between different units of functionality.
  • Thirdly, this step validates that our tests are robust enough to deal with any type of refactoring, which is the process of making code better without any changes in behavior. After all, if the behavior didn’t change, then the tests that validate the behavior still must pass.
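To round off the hypothetical discount example, a refactor-stage pass might extract the magic numbers and the rate-selection logic, while the tests written in the red stage keep guarding the behavior:

    # discounts.py -- refactored: same behavior, clearer structure, tests still green.
    DISCOUNT_THRESHOLD = 100.0
    BASE_DISCOUNT_PERCENT = 10
    LOYALTY_BONUS_PERCENT = 5

    def _discount_percent(order_total: float, is_loyal_customer: bool) -> int:
        # The rate-selection rule now reads in one place.
        if order_total < DISCOUNT_THRESHOLD:
            return 0
        percent = BASE_DISCOUNT_PERCENT
        if is_loyal_customer:
            percent += LOYALTY_BONUS_PERCENT
        return percent

    def calculate_discount(order_total: float, is_loyal_customer: bool) -> float:
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * _discount_percent(order_total, is_loyal_customer) / 100

If any of the red-stage tests broke during this restructuring, that would tell us the refactoring changed behavior rather than just structure.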

There is also a “puritan” approach to doing TDD where, instead of performing these stages per piece of functionality, you perform them per condition or state within that functionality. You write a single test and go through all three stages every time you add a new state or condition, then add another and repeat the process, and so on until the piece of functionality is finished.

Some people prefer to do it this way, but for many others, it just forces a lot of unnecessary rework. In any case, the people who invented TDD are fine with either approach, as long as the scope of each change doesn’t go beyond a single unit of functionality. In my opinion, people who are too dogmatic about these kinds of details tend to focus too much on the “how” and not enough on the “why”.

Of course, this is only a simplified summary of what TDD is. We could spend a lot of time talking about the nuances of these three stages; whole books have been written on the subject. But it gives us a clear enough picture of what TDD is.

The biggest benefits of TDD

For those who are new to TDD, the process may sound tedious and counterintuitive. So, let’s stop for a brief moment and talk about the benefits that it brings about.

It can help you not to forget to write the tests

Having automated tests with sufficient coverage makes our lives easier. And one of the biggest benefits of TDD is that it helps you not to forget to write the tests. If we write tests after the code, we may just move on to something else and neglect to write them.

This is especially useful for developers who are new to automated testing. It matters much less for experienced developers who have already developed the habit of writing tests wherever they can.

It can help you to learn a proper way of doing testing

Another way in which TDD is useful is that it teaches developers how to write good tests. Many developers abandon TDD after some time, but the good practices that they learned while practicing TDD stay with them forever. For example, this is what a LinkedIn user commented under a post that discussed the usefulness of TDD:

Some time ago, after a TDD training, we started to follow the TDD paradigm for a while. But for various reasons we left this path again. After all, we still write many more unit tests than before the training and also pay more attention to coverage.

It can reduce the amount of rework

If we write the implementation first and then write tests, we sometimes find that the code we have written isn’t easily testable, which forces us to redo some of it. Or we may have already started writing the tests, only to find that some of them aren’t very useful given the current state of the code. Either way, we may have to rework both the tests and the code.

TDD can help us avoid this. Because we have written the tests first and we know what the expected behavior is, we now just have to write code that exhibits that behavior.

However, once again, this is not a big problem for a developer who is experienced in writing well-designed, testable code. Such a person already knows what makes code testable, so they have a good chance of getting it right the first time.

Also, this doesn’t mean that TDD guarantees there will be no rework, and it definitely doesn’t guarantee that the development process will be quicker. Some rework is actually built into the process. Remember the refactor stage?

It can help with separating functionality boundaries

When we write unit tests, a good practice is to test per unit of functionality and not per unit of implementation. The difference between these two concepts is as follows:

  • A unit of implementation is a single method or a function.
  • A unit of functionality is a single procedure that may involve multiple units of implementation working together, but is still an atomic piece of behavior that would add no value if we took it apart.

For example, getting transformed data from the database via an API endpoint may be an atomic unit of functionality. While it involves several method invocations (invoking the API method, making the database query, transforming the data into a consumable format, and so on), there wouldn’t be much usefulness left if any of those invocations were removed.

While the boundaries between units of implementation are obvious, the boundaries between units of functionality are fuzzier and not as well defined. However, if we define the expected behavior first, it can help us structure our code in such a way that these boundaries become much more obvious. This, in turn, helps us avoid accidentally affecting other parts of the application while modifying a specific behavior.
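Here is a small hypothetical sketch of the difference in Python. The names (get_user_summary, FakeRepository, and so on) are invented for illustration: the test exercises the unit of functionality as a whole, with the database replaced at its boundary, rather than testing the private helper separately:

    from dataclasses import dataclass

    @dataclass
    class User:
        first_name: str
        last_name: str

    def _to_summary(user: User) -> dict:
        # Unit of implementation: a helper we don't test in isolation.
        return {"display_name": f"{user.first_name} {user.last_name}"}

    def get_user_summary(user_id: int, repository) -> dict:
        # Unit of functionality: fetch plus transform, tested as one behavior.
        user = repository.find_user(user_id)
        return _to_summary(user)

    class FakeRepository:
        # Stands in for the database at the boundary of the functionality.
        def find_user(self, user_id: int) -> User:
            return User(first_name="Ada", last_name="Lovelace")

    def test_returns_display_name_for_existing_user():
        summary = get_user_summary(user_id=1, repository=FakeRepository())
        assert summary == {"display_name": "Ada Lovelace"}

If the transformation logic later moves into a separate class, this test keeps passing, because it only cares about the observable behavior of the functionality.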

It can help you to separate the abstraction from the implementation

If we start writing code right away, we sometimes go back and forth on which parameters our methods actually need and which ones we can live without. This, in turn, may lead to unnecessary rework. And as deadlines approach, we may never arrive at the ideal shape of a method signature.

TDD may help us avoid this problem. Because we start by defining just the signature of the function or method that we want to test, it forces us to decide what the abstraction should be. The abstraction, in this context, is the surface area of an object that we interact with, such as the signatures of its methods. The implementation, on the other hand, is the logic that runs inside those methods.
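As a hypothetical illustration, the tests below pin down only the abstraction: the name shipping_cost, its parameters, and the result it should produce. The implementation behind that signature can later change freely, as long as the tests keep passing:

    # Written first: these tests fix the abstraction (signature and observable behavior).
    def test_shipping_is_free_above_the_order_threshold():
        assert shipping_cost(weight_kg=2.0, order_total=150.0) == 0.0

    def test_shipping_is_charged_per_kilogram_below_the_threshold():
        assert shipping_cost(weight_kg=2.0, order_total=40.0) == 10.0

    # Written afterwards: one possible implementation behind that signature.
    # It could be swapped for a flat-rate table later without touching the tests.
    def shipping_cost(weight_kg: float, order_total: float) -> float:
        if order_total >= 100.0:
            return 0.0
        return weight_kg * 5.0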

Once again, this is less of an issue for developers who are well-versed in coding best practices and have long experience writing clean code. They tend to be able to separate the abstraction from the implementation intuitively and get it right.

It can help you to validate the quality of your tests

It’s good to have high test coverage in our code, but how do we know that our tests are useful? The tests might be there, but are they validating the right things?

TDD has an inbuilt feature that somewhat mitigates this. Remember the “red” stage? It exists precisely for this purpose: if our tests fail when there is no implementation, at the very least we know that they don’t just pass no matter what.

Of course, it’s wrong to assume that tests failing in the absence of an implementation guarantees that they are of high quality. There are also other ways of validating the quality of tests, such as pair programming while writing them or doing mutation testing. But it’s worth noting that TDD has an inbuilt step that provides some of this capability.


These are the main benefits of following the TDD process while developing your application. Note that the titles of all of these include the phrase “can help”. It means exactly that: TDD doesn’t guarantee that it will deliver these benefits, nor does it guarantee that they cannot be achieved without it.

Unfortunately, with so much hype and so many misconceptions about TDD flying around the internet, many people have come to believe that TDD is absolutely essential and that nothing can be done without it.

My name is TDD and I came to save thee

The biggest misconceptions about TDD

Let’s now examine some of the most popular misconceptions about TDD and address them one by one. Knowing why these things aren’t necessarily true may save you a lot of headaches in the future.

It prevents bugs

Indirectly, TDD does help to minimize bugs. After all, good test coverage goes a long way toward reducing bugs, and, as we have already seen, TDD helps us achieve good test coverage by reminding us to write the tests and by keeping those tests at a reasonable quality.

However, at best, TDD only helps with this indirectly. As we have already seen, both good test quality and good test coverage can be achieved without following the TDD process. And while good test coverage will reduce the number of bugs, it will not prevent them.

Anyone who has ever worked on a real-life production system knows that many (if not most) bugs are things that would never have been caught even by excellent test coverage. For example, you may have a perfect test that validates that the system correctly records a transaction. You may even have accounted for every edge case and boundary condition. But once the app is deployed, you soon find out that it simply cannot cope with the amount of data it has to deal with in the real world.

TDD is the only way to write good tests

While the “red” stage of TDD is indeed designed to partially validate the quality of your tests, it is far from fool-proof and far from being the only way to do so. One alternative is so-called “mutation testing”: small changes are deliberately introduced into the code, usually by a tool, to break it, and you then verify that your tests pick those changes up.

You can do mutation testing to supplement TDD, but you don’t need TDD to do mutation testing. Not to mention that something as simple as a visual examination of your tests is often more than sufficient.
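As a hand-rolled sketch of the idea (dedicated tools such as mutmut for Python automate the generation of mutants), a mutant is a copy of the code with one small deliberate change, and a good test suite is expected to fail against it. The function names here are purely illustrative:

    # Code under test.
    def is_adult(age: int) -> bool:
        return age >= 18

    # One possible mutant: the same function with a single operator changed.
    def is_adult_mutant(age: int) -> bool:
        return age > 18  # ">=" mutated into ">"

    def test_boundary_age_counts_as_adult():
        # This test "kills" the mutant: it passes against is_adult,
        # but it would fail if is_adult_mutant were used instead.
        assert is_adult(18) is True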

Everyone will benefit from it

This idea assumes that all humans are wired in exactly the same way and that neurodivergent people don’t exist. Yes, TDD has some benefits, as we saw previously. And yes, for some people, TDD helps them write clean code. But not all people are wired this way.

Some of us are just more artistic. We need to “play” with the code before we can clearly define what it is supposed to do. The idea emerges as we try different things. Only then can we start thinking about how this code fits into the bigger picture.

Because we are all wired differently, TDD will not benefit everyone. For some people, it will introduce an unnecessary obstacle. If you enforce TDD in your organization, you may even force some of your more creatively-minded developers to head for the exit, as their work will no longer be enjoyable.

You must use it in every situation

There are TDD evangelists who insist that no code must ever be written before the tests, and that TDD must always be used, in every situation. Sometimes they even claim that well-known industry figures, like Robert Cecil Martin (commonly known as Uncle Bob), say the same.

But this looks to me like a game of broken telephone. Here is what Uncle Bob actually said about TDD:

TDD is a discipline. Like all disciplines it applies well in some circumstances and less well in others. Every developer should know and practice this discipline so that when it is called for they can use it to their advantage.

Robert Cecil Martin (aka Uncle Bob)

It will always give you 100% code coverage

For those who have never heard the term “code coverage” before, it is the percentage of your code that gets executed by running the tests in your primary test suite. 100% code coverage therefore means that your test suite invokes every line of your code one way or another.
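As a tiny hypothetical illustration of how that percentage comes about: with only the first test below, the line that returns “not hot” never runs, so a coverage tool would report the module as only partially covered; the second test is what brings it to 100%:

    def classify(temperature_celsius: float) -> str:
        if temperature_celsius >= 30.0:
            return "hot"      # executed by test_hot_day
        return "not hot"      # only executed once test_mild_day exists

    def test_hot_day():
        assert classify(35.0) == "hot"

    def test_mild_day():
        # Without this test, the "not hot" line would never run,
        # and line coverage would stay below 100%.
        assert classify(20.0) == "not hot"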

Following on from the previous misconception, some people will say that TDD will always give you 100% code coverage. The idea is that you won’t even have to aim for it; TDD is supposed to give it to you automatically.

But in reality, this will only happen if you treat TDD as a religion rather than a tool. 100% coverage implies that you are using TDD everywhere, even where it doesn’t give you any tangible benefits. In fact, if I see a code base with 100% code coverage, to me it’s a fairly reliable sign that some of the tests aren’t doing anything even remotely useful.

Some people will object. “But if you have less than 100% coverage, then some of your code remains untested”. While I agree that all of your code should be tested, this doesn’t necessarily imply that you should have 100% code coverage.

Some code can only be tested in a meaningful way by running end-to-end tests against a deployed application rather than against the raw codebase. User interfaces, interactions with external services, and various background processes are examples of such code. But those types of tests are not included in the primary test suite that actually measures the code coverage metrics.

The primary test suite consists of unit tests that run against the raw code base. You may also get away with some integration tests that are limited in scope. But since end-to-end tests require a deployed application running in an environment that mimics production as closely as possible, you cannot have them as part of your primary test suite.

Yes, you can still write unit tests against those components. But such tests will be useless at best and will give you a false sense of confidence at worst.

It doesn’t come with any tradeoffs

Sometimes you will hear statements along the lines of the following:

When you start doing TDD, there are tradeoffs. But when you master it, there are no tradeoffs.

This is presented as an axiom that is supposed to be self-evident and to require no proof. But is there any proper peer-reviewed research that validates this statement? Well, fortunately, there is.

There was a study published in the International Journal of Applied Engineering Research that investigated the pros and cons of TDD compared with writing tests postfactum. It did find that TDD, on average, reduced the number of bugs. But it also found that the tradeoffs were significant enough that the benefits were almost canceled out.

Another peer-reviewed study, published in the Journal of Systems and Software, produced even more disappointing results for TDD. It found that TDD did result in more test coverage on average. However, it also showed that the process took significantly more time than writing tests postfactum and that the outcome in terms of software quality was hardly different.

It will guarantee that your code will be clean and loosely coupled

No, it won’t. It may help you somewhat to define the boundaries between different pieces of functionality, as we already discussed. But it won’t automatically guarantee that you will end up with nicely written and loosely coupled code.

In fact, if a developer knows how to do TDD but doesn’t know any other best practices, like the SOLID principles and design patterns, TDD is guaranteed not to lead to clean and loosely coupled code. Yes, the developer will still refactor to the best of their ability. But relying on a subjective view of what clean code should look like is always much worse than following tried and tested best practices.

You cannot do TDD effectively unless you know the principles of clean code. On the other hand, if you already know those, TDD might be less useful.

Without it, you cannot properly design your APIs

Again, while TDD can help you decouple the abstraction from the implementation, that doesn’t mean you cannot do it without TDD. An experienced developer who has written many APIs and implementations will have absolutely no problem doing it, with or without TDD.

It is necessary if you need quick feedback

While it’s true that TDD gives you quick feedback, it’s not the only way to get it. Firstly, for proper feedback you need both the test and the implementation; it doesn’t matter much which of them was written first. Secondly, in some situations you need feedback on things that the primary test suite cannot cover.

For example, you might be writing something that relies on the read-only version of the production data and cannot be meaningfully tested without it. In this case, it might be better to launch the whole application and do an end-to-end test on it.

You cannot have sufficient test coverage without it

While TDD helps you remember to write tests, it doesn’t follow that you will forget to write them without TDD. For an experienced developer who has written many tests, providing sufficient test coverage is a habit, regardless of whether the tests are written before or after the implementation.

Everyone who tried it ended up liking it

This is demonstrably not true. There are plenty of teams and individual developers that have tried TDD and ended up abandoning it for one reason or another. For example, here is what a LinkedIn user commented under a post that discussed the benefits of TDD:

I have found, on the rare teams that “tried” TDD that the experience level was not there and it soon went back to business as usual.

I think it has promise but I also think it requires a higher degree of precision that most teams will never provide.

Another user said the following under the same post:

TDD was really transformative for me in how I thought about the code I was writing early in my career, and I think that was helpful. At this point I rarely actually practice it though, and I value integration and E2E tests FAR more than unit tests, which pretty much always happens after.

So yes. Some teams and individuals try it, find that it doesn’t fit them, and abandon it.

It requires less discipline than writing tests postfactum

Not neglecting to write tests requires discipline, but so does TDD. Writing tests before you have any implementation just isn’t very natural, and you have to use quite a bit of willpower when you start doing it. It only stops requiring so much discipline once it becomes a habit.

On the other hand, writing tests postfactum also becomes a habit when practiced enough. Yes, we may initially think that some piece of functionality is so simple that it doesn’t matter whether we write tests for it or not. But if we keep forcing ourselves to write tests, and to think about appropriate coverage while doing so, it soon becomes automatic, and we keep adding coverage without a second thought.

So, both TDD and writing tests postfactum initially require quite a bit of discipline. And both of them can become habitual activities over time that no longer require any discipline.

Wrapping up

TDD is just a tool. And, as a tool, it has its obvious benefits. Also, as a tool, it applies to some situations and not others. After all, you wouldn’t use a power drill where using a hammer is more appropriate, would you?

So TDD is definitely worth trying. It may indeed improve your development process. Or maybe it won’t. You can still discard it if it doesn’t meet your expectations.

But TDD definitely isn’t a religion. Don’t let anyone tell you that if you aren’t doing TDD, then you aren’t a good developer. Many good developers tried TDD and discarded it afterwards. There are many good developers who never even tried it because they didn’t find its promises convincing enough. And they still produce good-quality code that we rely on every day.

What you definitely shouldn’t do as a developer, however, is allow yourself to be dogmatic. Construction workers don’t argue which tool is better. They just do their job and use each tool when appropriate. And so should you.


P.S. If you want me to help you improve your software development skills, you can check out my courses and my books. You can also book me for one-on-one mentorship.