A common misconception about agile teams is that they cannot commit to dates. In fact, some organizations may adopt agile development methodologies, consciously or not, precisely to avoid committing to deadlines. I’ve been a part of many conversations that lean on terms like “flexible” and “agile” to justify an unwillingness to commit to delivery dates. I’d argue that these teams are abusing the perception of agile methodologies or misunderstanding their purpose.

To be able to answer questions about delivery dates, a team running an agile methodology needs to understand a few key pieces of data:

  1. The team’s delivery pace – better known as velocity.
  2. The velocities of planned and unplanned work.
  3. The product priorities and effort.

You may notice that there is nothing unusual about these data points — they are inherently the types of things that a seasoned agile project manager tracks. What’s unusual, or the opportunity that many agile teams are missing, is using those data points to understand delivery potential beyond the next sprint or two. If you’re measuring and actively monitoring these data points, why not do something with them?

Teams can use these data points to effectively forecast and plan delivery, albeit with decreasing certainty, several months to a year into the future. The challenge is doing this in a way that adheres to the principles of product agility while meeting the expectations of company leadership and clients.

At Rally®, we’ve struck a balance that works well for individual teams and sets of teams operating toward the same roadmap.

Starting with velocity

Using whatever sizing mechanism you rely on, you can identify your historical capacity; most teams at Rally use story point estimation with a form of the Fibonacci sequence. Based on a handful of sprints, usually at least the three most recent, we determine each team’s rolling velocity.
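As a rough sketch, a rolling velocity is just an average over the most recent sprints. The point totals and window size below are illustrative, not Rally’s actual data:

```python
def rolling_velocity(sprint_points: list[int], window: int = 3) -> float:
    """Average completed story points over the most recent sprints."""
    if len(sprint_points) < window:
        raise ValueError("not enough sprint history for a rolling velocity")
    recent = sprint_points[-window:]
    return sum(recent) / window

# Example: points completed in the last five sprints, oldest first.
history = [42, 38, 45, 40, 44]
velocity = rolling_velocity(history)  # averages the last three sprints -> 43.0
```

A longer window smooths the trend further but reacts more slowly to real changes in the team’s pace; three sprints is a common compromise.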

We then segmented the historic velocity into two categories: planned work, representing the work on our roadmap, and unplanned work, which is out of scope for our roadmap tracking.

Before continuing, it’s important to clarify how we segment this work. What each team considers in these two categories is up to them, but in simple terms, we consider things like one-off stories to optimize features, tech debt, and security remediations to be exception (unplanned) work. That said, if a very large technical effort is necessary to meet the product’s needs, the work may be considered a roadmap item to ensure we plan for it.

By doing this, we identified how much of our velocity we could reliably commit to roadmap work. The exception work may be no less meaningful, but it’s not something we communicate in our roadmap.

If, after determining the relative velocities of roadmap and exception work, you find the velocity for roadmap work swings wildly from sprint to sprint, you should focus on this area to understand why there’s so much variance. We focus on bringing the velocity standard deviation as far down as is reasonable to help give us a more reliable velocity trend for our roadmap. We do this by finding ways to split out and contain the unplanned work and track it separately. This type of solution is unique to each team, but that’s what makes roadmapping fun!
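The segmentation and variance check above can be sketched in a few lines. Here each sprint record carries the points closed against roadmap epics and the points closed as exception work; the field names and numbers are assumptions for illustration:

```python
from statistics import mean, pstdev

def split_velocity(sprints: list[dict]) -> dict:
    """Summarize roadmap vs. exception velocity and roadmap variance.

    Each sprint dict holds points closed as roadmap work and as
    exception work (the keys here are illustrative, not a standard).
    """
    roadmap = [s["roadmap"] for s in sprints]
    exception = [s["exception"] for s in sprints]
    return {
        "roadmap_velocity": mean(roadmap),
        "exception_velocity": mean(exception),
        # A high standard deviation signals an unstable roadmap trend
        # worth investigating before forecasting with it.
        "roadmap_stddev": pstdev(roadmap),
    }

history = [
    {"roadmap": 30, "exception": 12},
    {"roadmap": 34, "exception": 10},
    {"roadmap": 32, "exception": 14},
]
summary = split_velocity(history)
```

If `roadmap_stddev` is large relative to `roadmap_velocity`, the forecast built on that velocity inherits the same uncertainty, which is why containing unplanned work pays off.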

Prioritizing the features on your roadmap

Assuming you already have estimates for your work, the next step is prioritization, an area we originally struggled with. After our first year of execution, we realized that balancing an engineering roadmap and a product roadmap separately was too difficult. While it’s a painful activity, forcing the team to create one list of roadmap items in priority order provides a lot of perspective on what work is necessary and makes for a more logical roadmap.

Often the product and engineering work aligns with common business goals. For example, “build feature X so that we can broaden our user base to Y million users” would include not only feature work, but also architectural work to ensure that the application is scalable and mature enough to handle the load of new users. Putting these sets of work in a single prioritized list helps to force conversations about priorities and dependencies, and aligns delivery with the business’s objectives. When building our single list, it was important to group the related product and engineering work, so if priorities shift, we know which sets of work should move together.

After our roadmap is prioritized, we create epics for the highest priority work. The epics serve as the containers for all user stories in scope for those features. This is done to ensure we have an easy way to surface all work related to the roadmap, by epic.

Predicting the future

Tactically, our teams execute using JIRA, but we do most roadmap activity outside of JIRA. We’ve experimented with JIRA Portfolio, Project Plan, Aha!, and a few other solutions, but none provided the planning mechanisms or the transparency and control over outputs that we were looking for. Instead, we’ve built a Google Sheet that pulls data from JIRA via its API. This gives us a full view of both the work in flight and the epics we have not yet broken down for execution.
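To make the JIRA pull concrete, here is a minimal sketch of building the kind of search query such a script might issue against JIRA’s REST search endpoint. The instance URL, project key, and epic keys are hypothetical, and authentication and the actual HTTP fetch are omitted:

```python
from urllib.parse import urlencode

# Placeholder instance; substitute your own JIRA base URL.
JIRA_BASE = "https://example.atlassian.net"

def build_search_url(project: str, epic_keys: list[str],
                     max_results: int = 100) -> str:
    """Build a JIRA REST search URL for all stories under the given epics."""
    jql = (
        f'project = {project} '
        f'AND "Epic Link" in ({", ".join(epic_keys)}) '
        "ORDER BY rank"
    )
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{JIRA_BASE}/rest/api/2/search?{query}"
```

In practice the equivalent of this runs from the sheet’s scripts, and the response’s story points and statuses feed the forecast columns.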

We do a little bit of magic to determine whether the work is aligning with our size estimates. If we deviate from our original estimates, we discuss how we want to change the roadmap. The magic is all performed using Google Sheets and a few simple scripts that run from the sheets. The key output of this process is an update to the forecasted remaining work. We use this new data to identify deviations from the previous week. If there are changes, we decide whether to adjust our team’s allocation or our delivery timeline to complete the work.
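The “magic” reduces to simple arithmetic: divide the forecasted remaining points by the roadmap velocity to get sprints left, then project a finish date. This sketch assumes two-week sprints and illustrative numbers:

```python
from datetime import date, timedelta
from math import ceil

def forecast_completion(remaining_points: int,
                        roadmap_velocity: float,
                        sprint_length_days: int = 14,
                        start: date = date(2024, 1, 1)) -> tuple[int, date]:
    """Estimate sprints remaining and a projected finish date.

    The default start date is a placeholder; in practice it would be
    the date the forecast is refreshed.
    """
    # Round up: a partially filled sprint still occupies a full sprint.
    sprints_left = ceil(remaining_points / roadmap_velocity)
    finish = start + timedelta(days=sprints_left * sprint_length_days)
    return sprints_left, finish

sprints, finish = forecast_completion(remaining_points=160, roadmap_velocity=32)
# 160 points at 32 points/sprint -> 5 sprints, 70 days after the start date
```

Rerunning this each week with the updated remaining-work figure is what surfaces the deviations from the previous week’s forecast.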

Feature Forecasting

On the other side of the equation, every week we get a snapshot of the work closed in the past two weeks, which feeds into our ongoing velocity. This is a point-in-time evaluation of whether the team as a whole is delivering what we thought it could. The goal here is to check against our velocity predictions. We use this partly as a motivator for the team, but primarily as an indicator of whether the team is actually delivering without burning itself out. And if the team is not delivering, it provides us with actual historical data that we can use to update our model.

Velocity Burndown

*If you’ve read Andrew Grove’s ‘High Output Management’, this chart should look awfully familiar. The colored section represents our forecast and the gray section represents our history. Using a stagger chart, we can easily see where we deviated and if or when we reforecasted.*

You may be wondering, “Why evaluate it every week if it’s a two-week value?” By looking at the team’s throughput every week, but over a two-week period, we smooth out the inevitable peaks and valleys in any given day’s or week’s measurement. And because the teams operate at different cadences, a consistent measurement period helps us compare apples to apples, even with teams delivering their sprints at different times.
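The weekly snapshot over a trailing two-week window can be sketched as a simple rolling sum; the week-by-week numbers here are made up to show the smoothing effect:

```python
def trailing_throughput(weekly_points: list[int],
                        window_weeks: int = 2) -> list[int]:
    """Points closed in each trailing window, sampled once per week."""
    return [
        sum(weekly_points[i - window_weeks + 1 : i + 1])
        for i in range(window_weeks - 1, len(weekly_points))
    ]

# A spiky week-by-week series smooths into a steadier two-week trend.
weeks = [10, 30, 12, 28, 15]
snapshots = trailing_throughput(weeks)  # -> [40, 42, 40, 43]
```

The raw weekly numbers swing by 3x, while the two-week samples stay within a few points of each other, which is exactly why the wider window compares more fairly across teams on different sprint cadences.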

Evolving the roadmap

One important factor is that the roadmap is a living document. We review it with the team monthly and use it in roadmap review meetings with our leadership team that are held every two weeks. At Rally, there’s a lot of emphasis on using meeting time wisely, so we have a defined agenda for these meetings:

Roadmap Planning Agenda

In total, we spend 45 minutes with the product and engineering team leads and leave with a clear understanding of potential roadmap changes for the next few months. We currently plan our roadmap in detail for the next 6 months. We don’t expect that the roadmap will remain intact. In fact, that’s the point of the meeting: taking the time to identify where our predictions were wrong and then adjusting. This process is essential to improving the accuracy of our prediction model.

Without applying what we learn, we never improve. For example, if our model is showing that our actual effort on completed features is 10% over our original predictions, then we should have a serious conversation about bumping up our remaining epics by 10% to see how that affects our roadmap. Conversely, if our predictions on our velocity are showing that we’re actually delivering 20% more than we thought we could, maybe we can update our schedule and forecast to deliver some features earlier. The point is, if we’re measuring and actively monitoring, we can (and should) feed the data back into the process to refine our roadmap.
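Feeding the data back can be as simple as scaling remaining estimates by the observed actual-to-estimate ratio. This sketch uses hypothetical point pairs for completed features:

```python
def calibrate_estimates(completed: list[tuple[int, int]],
                        remaining_estimates: list[int]) -> list[float]:
    """Scale remaining epic estimates by the observed actual/estimate ratio.

    `completed` holds (estimated, actual) story-point pairs for
    finished features; both lists here are illustrative.
    """
    est_total = sum(estimated for estimated, _ in completed)
    actual_total = sum(actual for _, actual in completed)
    ratio = actual_total / est_total  # e.g. 1.10 means running 10% over
    return [round(points * ratio, 1) for points in remaining_estimates]

# Finished features came in 10% over estimate, so bump remaining epics by 10%.
adjusted = calibrate_estimates([(50, 55), (30, 33)], [40, 20, 60])
# -> [44.0, 22.0, 66.0]
```

Whether to apply the correction across the board or epic by epic is a judgment call; the conversation the number forces is the real value.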

Delivering on dates with an agile roadmap

So clearly, we’re doing many proactive things to monitor and adjust our roadmap. But, what do we do when our roadmap starts to tell us a story we don’t want to hear? What do we do when we’ve communicated to a client that we’d have a feature done by February 1, but now we look like we’re behind? After all, we’re not perfect at predicting the future, as much as we might try.

Simply put, we deal with the facts head-on. If we’ve tried increasing velocity and reprioritizing our feature work, but we’re still not going to be able to deliver, we update our roadmap based on our new information and then communicate it to our clients as soon as we can. Transparency into this process is essential to the team owning the roadmap and to keeping the trust of our clients.

Getting started

If you like what you see here, let’s recap the foundation needed before you proceed. First, set up a process for regularly evaluating your team’s velocity components with the team. Then, define the means by which work will be organized, as this helps force decisions on what’s in or out of scope for a given feature. After you have the basics in place, review this information consistently with your product owners, technical leads, and scrum masters.

Getting into the routine of talking about the roadmap using the foundation you’ve established with the team will support healthy discussions on your delivery plan and how the team is executing. As I said, the most important aspect is that you and the team are conscious of the evolution of the roadmap and have fresh data to support your decision making.

Learning to adjust

If you take one thing from this post, I hope it’s this: roadmapping is not about getting it right on day one. It’s about getting better as time goes by and improving your predictive confidence. Start measuring, learning, and improving, and you’ll take your first step to a more predictable roadmap and more reliable product delivery.