Author: Meetu Gujral

In my decade of working in agile roles as a developer, technical lead, scrum master, and coach, one question has confronted me the most: should we count defects as part of velocity or not? In this blog, I will summarize the different perspectives and suggest an approach that works for me.

 

Sprint execution

In the scrum world, a team works towards creating a shippable increment, sprint after sprint, by working on the committed sprint backlog. Once the stories are accepted by the product owner, they are ready for deployment. This is made possible by a proper CI/CD (Continuous Integration and Continuous Deployment) infrastructure and other DevOps processes.

Sprint execution also includes testing (unit testing, and manual or automated system testing, or both), which leads to bugs being reported. Depending on the Definition of Done (DoD), we often fix the bugs before the sprint ends and move the less severe, remaining ones to the product backlog.

Of course, if the relevant or required bugs are not fixed and the DoD of the user story is not met, we do not mark the story as done. Instead, we move it back to the backlog for grooming and prioritization, as required.

 

Bugs

Let’s consider a scenario where a scrum team completes sprint ‘n’ and moves on to planning sprint ‘n+1’. Of the 10 bugs reported in sprint ‘n’, 6 are fixed and 4 are moved to the product backlog.

More bugs can also land in the product backlog as a result of system testing, user acceptance testing, etc., especially when these happen in the ‘n+1’ or ‘n+2’ cycle, after the sprint completes.

 

Sprint planning

During the planning of sprint ‘n+1’, we will have a prioritized and groomed product backlog that includes bugs. While finalizing the backlog for sprint ‘n+1’, the team will pick up some stories and bugs, based on the priority set by the product owner. Hence, to account for the effort required to fix these bugs, the team will commit to fewer stories.

Another important point is to ensure that the bugs being picked up contribute towards the product’s MVP. If fixing them does not add significant value, they can be parked for later and taken up as new user stories.

 

Bug vs. new story

Story points for a user story include all the work required to complete it. So any bug, new or old, that sits in the backlog after the story is marked ‘done’ is effectively extra work after the story has been demoed and accepted. However, some argue that this should be treated as a change in requirements, with a new story created for it.

I can’t agree with this argument entirely, since it mainly depends on the way of working agreed within the team. I believe that even when a story is accepted, it is usually with an implicit agreement that its bugs will be picked up from the backlog.

There are instances when the reported bug is not a bug but a change. That is a different scenario and falls under change management.

 

Estimating bugs

Velocity is a measure of the volume of work a team can execute in a single sprint and is a critical metric in Scrum. It is usually calculated as the trailing average of completed story points over the last 3 sprints.
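As a minimal sketch of this calculation (the sprint numbers are hypothetical, and `trailing_velocity` is an illustrative helper, not a standard API):

```python
# Hypothetical sprint history: completed story points per sprint, oldest first.
completed_points = [34, 28, 31, 26, 30]

def trailing_velocity(points, window=3):
    """Average the completed story points over the last `window` sprints."""
    recent = points[-window:]
    return sum(recent) / len(recent)

print(trailing_velocity(completed_points))  # (31 + 26 + 30) / 3 = 29.0
```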

I have seen teams choosing one of the following approaches when it comes to estimating bugs:

 

Approach # | User Stories | Bugs          | Velocity
-----------|--------------|---------------|----------------
1          | Story points | Hours         | In story points
2          | Hours        | Hours         | In hours
3          | Story points | Not estimated | In story points
4          | Story points | Story points  | In story points
 

Case 1 – In approaches 2 and 4, velocity includes bug-fixing effort, i.e., it is the sum of the estimates of:

  • Stories and bugs added to the sprint backlog during the planning meeting
  • Bugs reported during sprint execution

Case 2 – In approaches 1 and 3, velocity excludes bug-fixing effort.
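To make the contrast concrete, here is a sketch with hypothetical numbers for a single sprint:

```python
# Hypothetical sprint outcome, to contrast the two cases.
story_points_done = 24   # accepted user stories, in story points
bug_points_done = 6      # bug fixes, if estimated in points (e.g., approach 4)

velocity_case_1 = story_points_done + bug_points_done  # includes bug effort
velocity_case_2 = story_points_done                    # excludes bug effort

print(velocity_case_1, velocity_case_2)  # 30 24
```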

 

Committed velocity

Case 1 –

When the team accounts for bugs in the committed velocity, we get a more accurate picture of how much work the team can actually complete. However, this hampers release planning: the release backlog contains only user stories, so the release burndown does not factor in bug-fixing effort, and the graph forecasts the team burning down faster than it realistically can.

One workaround is to add a dummy story whose points act as a placeholder for bug-fixing. This is not the cleanest approach, but at least it makes your release burndown realistic.
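A rough sketch of why the placeholder helps (all figures here are hypothetical, and the reserve size is a team judgment call):

```python
import math

# Hypothetical Case 1 numbers: velocity includes bug effort, but the release
# backlog holds only user stories, so a naive forecast is too optimistic.
velocity = 30.0        # includes roughly 6 points of bug-fixing per sprint
story_backlog = 120    # release backlog: user-story points only

naive_forecast = math.ceil(story_backlog / velocity)

# Workaround: a dummy placeholder story reserving capacity for bug-fixing.
bug_placeholder = 24   # rough reserve, e.g., ~6 points over 4 sprints
realistic_forecast = math.ceil((story_backlog + bug_placeholder) / velocity)

print(naive_forecast, realistic_forecast)  # 4 5
```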

Another benefit of this approach is that we can use historical data to gain insight into how much of the team’s bandwidth is consumed by bug-fixing.

Case 2 –

When the team commits to fewer stories to account for the time it will spend fixing bugs, the team’s committed velocity takes a dip.

However, with this approach, the velocity calculated sprint-after-sprint has the following benefits:

  • It gives us better insight into the team’s ability to complete the remaining work. Keeping bug estimates out of velocity makes the average velocity more reliable and more likely to accurately forecast the number of sprints the scrum team needs to complete a given release. Overall, this means better release planning.
  • By analyzing the velocity trend across sprints, we can easily identify sprints with reduced velocity and investigate the reasons behind the dip. Common factors include a change in capacity, inaccurate estimates, and more bugs in the sprint backlog.
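The release-planning benefit in the first point can be sketched as follows (numbers are hypothetical):

```python
import math

# Hypothetical Case 2 forecast: the average velocity excludes bug estimates,
# so dividing the remaining story points by it gives a direct sprint count.
remaining_points = 120
average_velocity = 24.0  # stories only; bugs are not estimated

sprints_needed = math.ceil(remaining_points / average_velocity)
print(sprints_needed)  # 5
```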

Conclusion

During trainings, team members often ask me: why do we not estimate the bugs? If we did, there would be no dip in the velocity.

Estimating bugs can indeed give us better insight into the amount of actual work done, i.e., the ‘effective velocity’ of the team.

 

However, my personal take on this is:

  • Effort spent on fixing bugs is negative work, so not accounting for it in velocity helps us focus on the business value being delivered, sprint after sprint. Instead of adding story points to bugs, we should work towards improving velocity by reducing the number of bugs.
  • Bugs reported during a sprint are often fixed within the corresponding story’s scope, so ideally we do not need a separate estimate for them.

 

Hence, I would recommend:

  1. Calculate velocity based on user stories alone and do not estimate bugs, whether they were reported during the sprint or pulled into the sprint backlog during planning.
  2. Prioritize defects from the perspective of value addition. The product owner should be the decision-maker here.
  3. Track the bug count and establish metrics that alert the team when the defect count seems high or when it is time to slow down and run a hardening sprint.
  4. Regularly conduct a root cause analysis (RCA) of bugs, especially critical ones. Discuss the important findings in retrospectives and add them as Kaizen items to the backlog.

  • Typical causes of bugs: changes in requirements, insufficient time, not enough automated testing, inadequate focus on design, etc.
  • Standard Kaizen/improvement actions: perform automated system testing, improve code coverage in unit tests, ensure transparent communication on requirements during development, involve all stakeholders in grooming, etc.
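The bug-count guardrail in recommendation 3 can be sketched as a simple check (the threshold here is a hypothetical team agreement, not a standard value, and `needs_hardening_sprint` is an illustrative helper):

```python
# Hypothetical defect-count guardrail: flag when the open-bug count
# suggests the team should slow down and run a hardening sprint.
OPEN_BUG_THRESHOLD = 15

def needs_hardening_sprint(open_bugs, threshold=OPEN_BUG_THRESHOLD):
    """Return True when the open-bug count crosses the agreed threshold."""
    return open_bugs > threshold

print(needs_hardening_sprint(9))   # False
print(needs_hardening_sprint(18))  # True
```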

 

In conclusion, it is worthwhile to remember that we influence what we measure.

If we focus on delivering value and reducing the number of bugs, we will increase the team’s productivity and velocity, i.e., deliver more value with the same effort.
