When (not What): Illustrated By Dilbert

I’d bet a nickel that most successful tech companies don’t have any problem coming up with things to work on, features to build, or products to launch. To the contrary, most companies probably have more of those things than they could ever find time to build, and 50onRed is no exception.

[Dilbert comic]

Unfortunately, a backlog of things to do does not a successful product make. What a backlog does do is distract. We’ve got 50+ tickets for our traffic platform alone, and that’s just one of our products. These range from the big (build a mobile app!) to the minuscule (bigger buttons!). While we’d love to do all of these things at once, it’s just not feasible. Building new features always takes a backseat to issues that affect our partner relationships, our core functionality, or our bottom line.

So, with a surplus of features and a deficit of time, what is one to do?

There are a number of methods we use for prioritizing features at 50onRed, but ultimately every method boils down to three key questions:

1. How will this feature improve the user’s experience?

2. How will this feature impact the business?

3. How much will building this feature cost?

User Experience

Ashley, our Community Manager, keeps records of all of the feature requests we get and the categories of all of the support tickets that are opened. Every week, she presents those records to the product team. During that presentation, the team is looking for two things:

1. Are there any significant changes in support requests compared to historical data?

2. Have we seen similar feature requests come up before?

Significant increases in support requests are extremely valuable information: oftentimes, they tell us that we’ve introduced some kind of ambiguity or confusion into the platform, and that we need to reconsider recent changes and search for where they might be misinterpreted. 

When we see feature requests, we add them to a list of features to investigate, and we keep track of how often we hear them. After all, the easiest way to find out what our users need is to listen to what our users tell us they need.
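Neither of those checks requires heavy tooling. As a purely illustrative sketch (this is not a description of Ashley’s actual workflow), tallying requests and spotting week-over-week spikes can be as simple as a pair of counters; every category name and number below is hypothetical:

```python
from collections import Counter

# Hypothetical example data; in practice this would come from the exported
# support-ticket categories and the running feature-request log.
last_week = Counter({"billing": 12, "tracking pixels": 8, "reporting": 5})
this_week = Counter({"billing": 11, "tracking pixels": 19, "reporting": 6})

feature_requests = Counter(["bigger buttons", "mobile app", "bigger buttons"])

# Flag any support category whose volume jumped noticeably week over week.
for category, count in this_week.items():
    previous = last_week.get(category, 0)
    if previous and count / previous >= 1.5:
        print(f"Spike in '{category}' tickets: {previous} -> {count}")

# Surface the most frequently requested features so far.
print(feature_requests.most_common(3))
```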

[Dilbert comic]

The next step in the pipeline is executed by Bernie, our Lead User Experience Engineer, and Jack, our Product Manager. Bernie and Jack write user stories for how a feature will be used by a particular user of our platform. In a previous post on the 50 blog, Jack wrote about personas (http://www.50onred.com/blog/tag/personas/). User stories are written with a persona in mind:

“As Max, I want the capability to quickly view my least profitable campaigns so that I can effectively allocate my funds.”

or

“As Bob, I want to know exactly when to use server-to-server tracking and when to use JavaScript pixels, so that I can optimize my campaigns with the most accurate data.”

The user stories are written directly on the issue ticket, so that when we consider the feature further down the line, we know exactly who it will benefit, how they will use it, and why they need it.

Business Impact

Once potential features have a user story, it’s important to also consider a “business story”. Will adding that feature help one user, but hurt another? Will a feature increase revenue? Could a feature cause a security risk? How does a feature fit into the overarching product roadmap?

It is important to remember that one particular persona is not the only person affected by a new feature. Take this user story, for example:

“A new affiliate marketer would like a very detailed, multi-page, wizard-style campaign-creation flow, which explains each step of the campaign-creation process along the way, with examples, recommendations, and tips.”

When evaluating this, Bernie and Jack might realize a few things. First, this might be great for a new user, especially when they first join the platform. Second, this could be great for Ashley and the account managers, because it could cut down on support tickets from new advertisers. Third, this could be quite frustrating for a more experienced affiliate marketer, who creates many campaigns every day and would be slowed down considerably by a multi-page creation process. It also might become frustrating for the new marketer after the first dozen or so campaigns he makes, once he’s gotten the hang of things.

So, Jack and Bernie might rewrite the ticket in a way that makes more sense for all stakeholders, or they might conclude that the ticket doesn’t make sense for the product right now and discard it.

Feature Cost

Once a user’s needs have been considered and the business case has been analyzed, it’s time to think about what a feature will actually cost. What are all of the things that need to happen for a feature to be complete? Almost always, these tasks extend far beyond the initial development cost. On the engineering side, costs include installing or upgrading any third-party libraries necessary to build out the feature, refactoring existing parts of the codebase to accommodate changes, and, of course, long-term maintenance. Other costs might include building onboarding or support documentation, training account managers, or communicating new changes to partners and users.

Finally, there is also a hidden cost associated with each and every feature added to a product: complexity. Even seemingly benign additions contribute to the learning curve and complexity of the platform. This alone is not a reason to not build something, but it is definitely a cost that must be considered with every addition.

Pulling it all together

At this point, potential features have been examined in terms of the user experience, the business needs, and the feature cost. More than likely, some of them have been thrown out along the way, because they would have detracted from the user experience, lacked a compelling business justification, or simply cost too much. The rest have been broken down into subtasks detailing every step that needs to happen before the feature can be released.

Now, it’s easy to put all of this information into a matrix, with one row for each potential feature and a column for each of the three areas above.

Unfortunately, this is where the framework ends. The next step, of course, is to sort each of the features by a score which incorporates each of the categories described. The weight of each category, though, will depend on company and departmental missions and goals. Consider, for example, the following two mission statements:

Opera Software:

“We strive to develop a superior Internet browser for our users through state-of-the-art technology, innovation, leadership, and partnerships.”

AutoNation:

“To be America’s best run, most profitable automotive retailer.”

Clearly, Opera and AutoNation are going to put very different weights on different categories, and rightfully so. And, because each company defines product success differently, a one-size-fits-all approach would inevitably lead to failure for one or both of them.
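To make the sorting step concrete, here is a minimal sketch of what a weighted scoring pass over that matrix might look like. It is only an illustration, not our actual process: the feature names echo examples mentioned earlier, and every score and weight is a made-up placeholder that each company would tune to its own goals.

```python
# Hypothetical weighted-scoring sketch. Scores (1-5) and weights are
# placeholders; in practice they come from the matrix described above.
FEATURES = [
    # (name, user_experience, business_impact, cost)
    ("Mobile app",               4, 5, 5),
    ("Bigger buttons",           2, 1, 1),
    ("Campaign-creation wizard", 3, 3, 4),
]

# A UX-driven company might weight user experience heavily; a
# profit-driven one might favor business impact instead.
WEIGHTS = {"user_experience": 0.4, "business_impact": 0.4, "cost": 0.2}

def score(ux: int, biz: int, cost: int) -> float:
    # Cost works against a feature, so its weighted value is subtracted.
    return (WEIGHTS["user_experience"] * ux
            + WEIGHTS["business_impact"] * biz
            - WEIGHTS["cost"] * cost)

# Sort highest-scoring features first.
for name, ux, biz, cost in sorted(FEATURES, key=lambda f: score(*f[1:]), reverse=True):
    print(f"{name}: {score(ux, biz, cost):.2f}")
```

Swapping the weights is the whole point: the same matrix produces a different ordering for a company like Opera than for one like AutoNation.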

Getting it built

All that remains after prioritization is finding time in a given sprint to accommodate the top tickets, and then following up. Every new feature that is added should inform future features: was the cost estimation accurate? Did users use the feature as much as we expected? Did the feature affect the bottom line? Checking back after implementation to verify assumptions is important, because those same assumptions will probably be used again to inform future features. If an assumption is wrong for one feature, it will probably be wrong for subsequent ones, too.

So, let’s review: when prioritizing features, listen to your customers. Find out what they want and find out what they’re saying. Evaluate how features can affect their experience, and then consider how those same features will affect the business as a whole. Is there a compelling reason to build something? Next, look at all of the new feature costs, including the hidden and long-term ones. Using that information, sort issues according to your company and department goals and objectives. Finally, build the feature and verify your assumptions, so that next time, your prioritization is as effective as it can possibly be.