Operational KPIs that will let you know your Data Team is creating impact (rather than fixing & firefighting)

Your data team might be buzzing away answering stakeholder questions, but that does not mean it is creating business value. Even if your stakeholders find the answers helpful in the moment and are happy with the interactions, the data team might not be impactful on a holistic level. This blog is dedicated to giving you some practical ways to measure and manage business impact.

Impact has a couple of components:

1) Prioritization: Answering the most important questions. Prioritization means focusing your capacity on the most important things. In a future blog, I will explain how to use tools such as the Eisenhower Matrix and Kanban board to make sure that your teams are working on the right things.

2) Coverage: Having sufficient capacity and scope relative to what the business needs. Your team needs a certain coverage to provide enough breadth of analytics to inform enough decisions to create impact.

3) Quality: Producing work of acceptable quality. Quality here means that fixing bugs, re-doing work, or deep diving into inconsistencies and other quality-related follow-on questions is kept to a sustainable minimum. A piece of work of good quality is shippable without any follow-on work to make it acceptable. What counts as acceptable should be stipulated by acceptance criteria.

In this blog, I will explore some KPIs (Key Performance Indicators) that show that, given you are working on the correct priorities, your team is effective in outputting quality work and creating impact. I created these metrics by applying filters to user stories on Asana cards, tracking the team’s operational effectiveness and work distribution with the data those filters produce in Asana’s dashboard features. The caveat with card-based metrics is that they count the number of cards without weighing the importance or size of a task. You can use story points to weigh size. However, I suggest you schedule regular external validations with your stakeholders to see how the metrics match their experience.

In addition to being useful management tools, these KPIs will enable you to measure and quantify the value of investing in behind-the-scenes work in data teams, which is often invisible and thus underinvested in by organizations.

The statistical re-do ratio:

# of tasks that are fixes, re-dos, deep dives into inconsistencies, etc. / Total # of tasks

This ratio tells you the share of tasks that have to be fixed, re-done, or significantly clarified due to misalignments with data. In effect, it is the follow-on work caused by each task.
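The ratio above can be sketched in a few lines of Python. This is a minimal illustration, not an Asana API call: the task records and the `tags` field are hypothetical stand-ins for whatever your project tool exports (e.g. a CSV export of cards with their labels).

```python
# Tags that mark a card as follow-on work; adapt to your own labeling scheme.
REDO_TAGS = {"fix", "re-do", "deep-dive"}

def redo_ratio(tasks):
    """Share of tasks tagged as follow-on work (fixes, re-dos, deep dives)."""
    if not tasks:
        return 0.0
    redos = sum(1 for t in tasks if REDO_TAGS & set(t["tags"]))
    return redos / len(tasks)

# Hypothetical export of four cards, two of which are follow-on work.
tasks = [
    {"name": "Churn report", "tags": []},
    {"name": "Fix churn numbers", "tags": ["fix"]},
    {"name": "Funnel analysis", "tags": []},
    {"name": "Deep dive: funnel mismatch", "tags": ["deep-dive"]},
]
print(f"{redo_ratio(tasks):.0%}")  # → 50%
```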

Some organizations have statistical re-do rates of 100% or above, which means that every task needs to be done twice on average.

Reasons for a high re-do rate can include data inconsistency, poor specification of tasks on the part of the stakeholder, analysts who are too junior for the task, lack of support within the analyst team, or other context that hinders quality.

Let’s say you have a 25% statistical re-do rate.

That means you would only have to repeat one task for every four tasks instead of doing everything twice, statistically speaking, which would improve your analytical throughput rate.

Analytical throughput rate:

Analytical throughput rate = Analytics work capacity / Time unit

Let’s say 10 deliverables per week is the capacity your team can deliver.

If you have a 100% statistical re-do rate, every task would have to be done twice, which means your team will effectively only produce 5 acceptable deliverables per week.

Effective analytical throughput rate:

Effective analytical throughput rate = Analytical throughput rate / (1 + statistical re-do rate)


With a 100% statistical re-do rate: 10/2 = 5 units per week. Sticking with the above example, your team will output 5 acceptable deliverables per week at a busyness level of 10 deliverables per week.

With a 25% statistical re-do rate:

10/1.25 = 8 units per week. That means 8 acceptable deliverables per week will be done at a busyness level of 10.
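The two worked examples above follow directly from the effective-throughput formula. A small sketch, using the numbers from this post:

```python
def effective_throughput(capacity_per_week, redo_rate):
    """Acceptable deliverables per week, given the statistical re-do rate.

    effective = capacity / (1 + redo_rate)
    """
    return capacity_per_week / (1 + redo_rate)

# Team capacity of 10 deliverables per week, as in the example.
print(effective_throughput(10, 1.00))  # 100% re-do rate → 5.0 per week
print(effective_throughput(10, 0.25))  # 25% re-do rate → 8.0 per week
```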

Now you can quantify the business value of improvement

That means that improving the “behind the scenes” operations can deliver quantifiable improvements.

In the example above, going from a 100% re-do rate to a 25% re-do rate improves the effective analytical throughput rate by 60%, from 5 to 8 analytical deliverables per week.

“Behind-the-scenes work” can include data quality issues, operational inefficiencies from over-reliance on ad-hoc work, poorly designed and/or implemented processes and tools, poor documentation, and an undertrained and/or undersupported workforce. If you can root-cause the inefficiencies to a specific problem, then you can also quantify the ROI of fixing a “behind the scenes” problem, which will make your team more impactful – and more fun to work for.

Stay tuned for my future blogs. You can follow me, Elizabeth Press, on LinkedIn and/or D3M Labs’ LinkedIn profile.

You can hear more of my ideas about DataOps on Analytics Anonymous with Valentin Umbach.

Since I am currently looking for a Director / Head of / VP-level data leadership role, please send me a message on LinkedIn or an email at Elizabeth.Press@d3mlabs.de if you are looking for somebody to lead your data department. Please note that I am open to remote opportunities within Europe, but am based in Berlin permanently.
