
Misaligned Workplace Incentives

Career · Software Engineering · Incentive Structures · Workflow Management

Approaching the end of the year now for the third time since joining the software industry, I’ve noticed a very clear trend: the busyness of my job drops drastically after Thanksgiving. When December rolls around, the office is often a ghost town; we probably operate at half capacity. Given this extra free time, I’ve begun to reflect on my own job, both day-to-day and long term.

I was an Econ minor in undergrad, particularly focusing on micro, and I find discussions of game theory and incentive behavior pretty intriguing. What I’ve noticed at work is the feeling that what I’m incentivized to do on a daily and, to a lesser extent, long-term basis does not always align with what I’d consider to be the firm’s longer term goals of providing value for our clients and shareholders, as trite as that definition of goals may be.

I’ve decided not to mention my workplace by name, though for the curious, even cursory research (on this site or LinkedIn) will quickly reveal my employer. I’d also rather the discussion center on the incentive structures themselves than become a criticism of the company. That said, I feel okay posting this because I have raised these concerns with my manager, team lead, and agile leads to no avail. Maybe a public Internet forum will prove a better channel.

Short Term - Optimizing on the Wrong Metrics

David A. Patterson, renowned computer scientist, Turing Award winner, and co-author with John Hennessy of the popular textbook Computer Organization and Design, has a relevant quote:

For better or for worse, benchmarks shape a field.

While this originally refers to computer hardware and software performance metrics, the spirit of the quote applies just as well to management practices. If you’re optimizing for the wrong benchmarks, you’ll inevitably guide the entire organization in a less productive direction.

For some context, our team works in an “Agile” scrum methodology, which many will be familiar with. We use Atlassian’s Jira product and work in two-week sprints, completing the tickets assigned to us.

What is important in this methodology is how you measure success. Our organization has chosen two metrics: “Churn” and “Say-Do Ratio”. I’ll first reproduce these as defined by our Agile leaders (find whatever group is the real leader on this one), and then comment on what they mean in practice for me, a Software Engineer II, because in my view they have become pretty distorted.

Churn

Churn - Defined as the measure of how much planned work is modified once the sprint has begun. This could include stories added mid-sprint, stories removed, or significant changes in scope. The goal is to keep the sprint stable once it starts.
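To make the definition concrete, here is a rough sketch of how a churn metric like this might be computed. The point-based accounting and field names are my assumptions, not how our Agile leaders actually calculate it:

```python
def churn(planned_points: float, added: float = 0,
          removed: float = 0, rescoped: float = 0) -> float:
    """Fraction of the sprint's planned work that changed after the
    sprint began: stories added, stories removed, or scope changes,
    all measured against the original plan."""
    if planned_points == 0:
        return 0.0
    return (added + removed + rescoped) / planned_points

# A sprint planned at 40 points, with 5 points added mid-sprint and
# 3 points dropped, churns at (5 + 3) / 40 = 0.2, i.e. 20%.
print(churn(40, added=5, removed=3))  # 0.2
```

Note that under a definition like this, pulling extra backlog work into an in-progress sprint counts against the team exactly as much as dropping work does, which is the distortion discussed below.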

My biggest complaint is with Churn. Since we optimize around minimizing Churn, we are incentivized to complete exactly what was planned for the sprint. No more, no less. From my understanding, this metric is emphasized because leadership believes development teams should spend sufficient time on sprint planning, and that teams which plan well are more likely to meet deliverables.

However, from a developer’s perspective, I am no longer incentivized to consider the long-term deliverable, only to complete exactly what was planned for a single two-week period. This means in sprints where I am ahead of schedule, I am not encouraged to pull more work, because doing so would hurt our team’s metrics. It’s actually more “optimal” for me to wait and add those extra items to the next sprint’s planning, rather than improving throughput now. One can see how this fails to motivate a developer to work ahead on the backlog until those items are actually pulled into the active sprint.

In theory, leadership’s reasoning may make sense, but it breaks down in practice. Ultimately, our clients don’t care about what we plan for—they care if we can meet their needs.

Say-Do Ratio

Say-Do Ratio - Defined as the ratio of what the team “said” it would do at the beginning of the sprint to what it actually “did” by the end. A ratio closer to 1.0 (or 100%) indicates better predictability and reliability.
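Following the definition above (“said” over “did”), a sketch of the metric, alongside the per-developer throughput figure I argue for below. Again, the point-based accounting is my own assumption for illustration:

```python
def say_do_ratio(said_points: float, did_points: float) -> float:
    """Committed ("said") work divided by delivered ("did") work,
    per the definition above. Closer to 1.0 means more predictable."""
    if did_points == 0:
        return float("inf")
    return said_points / did_points

def throughput_per_dev(did_points: float, num_devs: int) -> float:
    """Delivered points per developer: the figure the ratio hides."""
    return did_points / num_devs

# Two teams of 5, both committing slightly more than they deliver:
# Team A: says 40, does 38 -> ratio ~1.05, throughput 7.6 pts/dev
# Team B: says 20, does 19 -> ratio ~1.05, throughput 3.8 pts/dev
print(say_do_ratio(40, 38), throughput_per_dev(38, 5))
print(say_do_ratio(20, 19), throughput_per_dev(19, 5))
```

The two teams are indistinguishable on Say-Do Ratio despite one delivering twice as much per developer, which is exactly the blind spot discussed next.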

Turning to Say-Do ratio, I do think this is an inherently better metric as it attempts to measure follow-through and can reward teams that accurately gauge their capacity. This probably better captures the planning ability of a team, unlike churn. But which team is better, one with a consistent ratio of 0.95 or one at 1.05? I’d argue it doesn’t matter without context, that it’s really the throughput of a team per developer (or even per $) which is important. Would you want to work at an organization which optimizes for “predictability” or for getting things done?

Although management has acknowledged the risk of ‘gaming the Jira system’ in multiple town halls, the current metrics still create scenarios where individuals and teams are nudged toward such behavior. This reveals a disconnect between stated intent and practical incentives. Over the short term, someone optimizing for career and team performance is not incentivized to prioritize getting work done or to ensure their Jira board reflects reality. This is a large source of daily dissatisfaction at my job.

Long Term - Year End Reviews

At our company, there is a very formalized process for year-end reviews. A contributor (myself) writes their self-evaluation in October, their manager reads their directs’ self-evaluations and writes their own reviews in November, there is a formal discussion in December, and then in January any updates in position and compensation are revealed for the next year.

I don’t have an issue with the formalized process in itself, but I do take issue with its length. When I submit my review (usually in mid-October), any work I do for the final 2.5 months of the year is objectively less important than the previous 9 months of work. From a purely rational standpoint, there’s little incentive to sprint through meaningful tasks when the evaluation period that matters is effectively closed. Either I devote some portion of my current review to objectives I completed last year (almost insinuating I didn’t have enough to write about this year), or I project what I will do in the upcoming months (with no incentive to actually complete it). Neither option is very appealing.

Towards the end of the year, the work slows down a lot. There are real societal-level reasons why, considering the holidays and the new year, but I’d argue that this review process meaningfully contributes to that slowdown as well.

Conclusion

Look, the obvious answer is to do the right thing: complete all your Jiras in each sprint to the best of your ability and work through the end of the year diligently. I do my best to align with this, and I feel I do better than many of my peers. That being said, I struggle to understand why our workplace incentives can’t be updated to encourage this behavior more directly. I suspect the most likely explanation has something to do with the “optics” of doing work: management wants to see predictable, stable results rather than spur-of-the-moment bursts of productivity. I’m not sure that is the best recipe for innovation.

My recommendations?

Switch from Scrum boards to Kanban boards (Jira supports both), as this eliminates the time boxes of two-week sprints and could allow work to flow more naturally. If you’re still convinced by the Agile methodology and want to keep sprints, at the very least update your evaluation model to remove Churn as a relevant metric. I also think it is important to focus some metrics on individual developers to create visibility into who the most productive contributors are within a team. Invest in those who continually deliver, rather than constraining everyone to an overly rigid process.

In the long term, condense the year end review process to a 2 or 3 week period in December, allow for flexibility from managers and contributors to move the process a couple of weeks earlier if PTO will impact this timeline, and consider formalizing and emphasizing the rolling feedback throughout the year instead of placing outsized importance on a single annual checkpoint.

The low hanging fruit is there, and these relatively small changes could have drastic effects on worker productivity and deliverables, in my view.

Ultimately, who am I but some guy with strong opinions about workflow management processes who has chosen to fight the perils of an organizational structure that has become too vertical and too bureaucratic through public internet blogging? I can acknowledge that my perspective is likely limited, but hopefully someone agrees with my position on misaligned incentive structures and might try to recalibrate them for the good of the firm.