
Setting a strategy to pay down technical debt

January 15, 2021.

When you take over a project that’s at an advanced stage of development, or even in production, you can expect a few surprises. And while refactoring should be an integral part of any normal development cycle, it’s not always easy to consider the time spent as an investment.

Over the last few months, I have worked on two applications weighed down by heavy technical debt. For one of them, we had to resort to a progressive repayment strategy, one that lends itself well to most development projects.

At first glance, assessing the cost of technical debt can seem complex. Keep in mind that a minimum effort barely allows you to keep up with the interest: every new feature is like a new loan added to the principal. To make a dent in the principal, you must assess the current debt and revisit some definitions and practices within the team.

What exactly amounts to technical debt? Obsolete libraries or APIs. Known bugs. Performance problems caused by the aforementioned. Every shortcut taken to deliver on time, without actually addressing the problem. They all are part of the overall debt. Does sub-optimal architecture constitute technical debt? Probably not, and architectural changes are the stuff of a whole other article. What about gaps in unit testing? Probably not debt as such, but implementation of these tests will probably be part of the good practices that will keep you from adding to your debt.

Your first step is to dissect the application and categorize every item of debt. The time required for this analysis will correlate directly with the previous team’s involvement in the project. Once you’ve analyzed the project, each item of debt will fall into one of three main categories:

  • Items of general debt that can be paid off with no direct consequences on functionalities. They are good candidates for their own story (for example: “update React to the latest version”);
  • Items of general debt that repeat across functionalities. They are good candidates for becoming sub-tasks (for example, “transition classes to functional components”);
  • Items that are story-specific and will not be duplicated (typically, bugs).
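
The triage above can be captured directly in your backlog tooling, or sketched in a few lines of code. The item names below are illustrative, not taken from the project; only the three categories come from this article.

```python
# Hypothetical triage of debt items into the three categories above.
from enum import Enum, auto

class DebtCategory(Enum):
    OWN_STORY = auto()    # general debt with no direct impact on functionalities
    SUB_TASK = auto()     # general debt that repeats across functionalities
    BACKLOG_BUG = auto()  # story-specific debt, handled by backlog priority

# Example debt register (item names are illustrative).
debt_items = {
    "update React to the latest version": DebtCategory.OWN_STORY,
    "transition classes to functional components": DebtCategory.SUB_TASK,
    "fix crash when exporting an empty report": DebtCategory.BACKLOG_BUG,
}

def plan(item: str) -> str:
    """Map a debt item to how it enters the sprint backlog."""
    category = debt_items[item]
    if category is DebtCategory.OWN_STORY:
        return f"story: {item}"
    if category is DebtCategory.SUB_TASK:
        return f"sub-task on each affected story: {item}"
    return f"bug, ordered by backlog priority: {item}"
```

Keeping the mapping explicit like this makes the team’s triage decisions visible in one place, rather than scattered across individual tickets.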

Before assessing the effort required to pay off each item of debt, you must revisit the concept of completion. When can a story be considered completed, and closed? Before the beginning of QA testing? After QA testing and merging into the main development branch, but before quality control testing? Or after the feature is delivered to users?

In our case, my team found that the most beneficial approach was to consider stories as completed after the successful completion of QA testing and the merging of our code into our development branch. However, the agreement provided that this branch had to be deliverable at any given time, meaning that no incomplete or problematic merge was ever allowed.

During our first backlog grooming session, we had to implement a methodology to systematically pay down debt. It proved to be quite simple, yet thorough and efficient.

We decided that a maximum of 20% of each sprint would be dedicated to paying down the first category of debt in our list above, i.e. the items of debt that were not likely to create bugs affecting individual functionalities.

For the second category of debt, or items of general debt that repeated across features, we decided on a phased-in approach. Every time we addressed an existing feature, two sub-tasks would automatically be created for the story, the first for code refactoring and the second for unit tests. In most cases, these two sub-tasks accounted for 15 to 30% of the story points.

The third category of debt, feature-specific bugs, would be paid down based on the priorities set out in the backlog.
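
Taken together, the two sizing rules above can be sketched as a small planning helper. The 20% sprint cap and the 15–30% sub-task share are the figures from this article; the even split between the two sub-tasks is an assumption for illustration.

```python
def debt_budget(sprint_capacity: float, cap: float = 0.20) -> float:
    """Story points available for category-one debt: at most 20% of the sprint."""
    return sprint_capacity * cap

def subtask_points(story_points: float, share: float = 0.15) -> tuple[float, float]:
    """Split a story's debt share between its two automatic sub-tasks.

    `share` is the fraction of story points reserved for debt (15-30% in our
    case). Splitting it evenly between the refactoring sub-task and the
    unit-test sub-task is an assumption, not a rule from the article.
    """
    debt = story_points * share
    return debt / 2, debt / 2
```

For instance, a 40-point sprint leaves at most 8 points for category-one debt stories, and a 10-point story with a 20% debt share yields two 1-point sub-tasks.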

After three sprints using this approach, the results were conclusive. Each code refactoring allowed us to eliminate many shortcuts, improve modularity and test every line of code, which had a direct impact on ease of maintenance, implementation, reliability and performance. Better yet, the gains were tangible on both sides of the application: for developers and, especially, for users. In the end, the size of the application package was reduced by 70% (from 12 MB to 3.5 MB), loading time plummeted by 80%, and unit test coverage increased from 2% to 60% (about 380 new tests).

At the outset, this effort, and its cost, were very much an unknown. In the end, however, the benefits far outweighed the investment, even over the long term. It’s sometimes difficult to be fully transparent about existing debt and expose it to the entire team through the backlog, but if you define metrics for each objective, the gains will be gratifying for all concerned.