When it comes to technology, almost anything can be measured. The key question is whether that measure has any real value.
Whether you’re the head of a large, experienced technical team or a non-technical leader overseeing technology and product development in your business, the need to measure your team’s performance safely is key.
With that in mind, how do you decide on core measures that apply to a small team and stay relevant as that team grows?
Is this a team or individual metric?
The first consideration for any valid measurement is understanding your data source: is what you’re measuring best captured per person, or across the team as a whole?
For example, a group of developers responsible for resolving high-priority issues in live or production software will typically have a ‘time to resolution’ metric. Individual members may work alone or together on issues as they arise, so the measure belongs to the team: collectively, your team works as quickly as possible to implement a solution, and the metric is an average of the whole team’s effort, regardless of who was involved at the time.
On the other hand, let’s say each junior developer in your team has a goal of working through fifteen tasks in any given month. Their capability to work through these tasks is a clear demonstration of their knowledge, ability to communicate, and overall growth as a developer. If one developer were to work through fifty tasks in the month and another were to work through five, your team may, on average, be exceeding your goal, but you have outliers at both ends of the spectrum: one who is excelling and one who needs additional support. Crucially, this is not a person-to-person comparison; it is two people working against a set benchmark. These are individual metrics.
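To make that concrete, here is a minimal sketch in Python using the hypothetical figures above (fifty and five tasks against a fifteen-task goal), showing how a healthy team average can hide individuals at both ends of the spectrum:

```python
# A minimal sketch using the hypothetical figures above: a healthy team
# average can hide individuals at both ends of the spectrum.
BENCHMARK = 15  # tasks per developer per month

tasks = {"dev_a": 50, "dev_b": 5}

team_average = sum(tasks.values()) / len(tasks)
print(f"Team average: {team_average:.1f} tasks (benchmark: {BENCHMARK})")  # 27.5 – exceeds the goal

for dev, done in tasks.items():
    if done >= BENCHMARK:
        print(f"{dev}: {done} tasks – meeting or exceeding the benchmark")
    else:
        print(f"{dev}: {done} tasks – needs additional support")
```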
Create benchmarks, not comparisons
It’s key to differentiate benchmarks and metrics at this stage. A metric is quantitative – e.g. 75% of our team exceeded their target for tasks completed. The target, or benchmark, is fifteen tasks.
Benchmarks give your team, and the people within it, something to work towards, with the metric coming from their performance against that benchmark. The two subsequently work together to give the full context.
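As a minimal sketch of that relationship, with hypothetical task counts: the benchmark is the target, and the metric is the team’s performance measured against it.

```python
# A minimal sketch with hypothetical task counts: the benchmark is the target,
# the metric is the team's performance measured against it.
BENCHMARK = 15  # tasks per person per month

tasks = {"dev_a": 18, "dev_b": 21, "dev_c": 16, "dev_d": 9}

exceeded = sum(done > BENCHMARK for done in tasks.values()) / len(tasks)
print(f"{exceeded:.0%} of the team exceeded the {BENCHMARK}-task benchmark")  # 75%
```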
Ideally, a benchmark uses historical data from your own business to give a fair reflection of an achievable target. Taking a benchmark from a book, from another business, or from a best guess isn’t safe – safe meaning that you can introduce any form of measurement without inciting stress or uncertainty in the people the measure applies to. Any form of measurement can provoke an ‘us versus them’ mentality, or a ‘why is this being measured now – is there something wrong?’ reaction.
Benchmarks that come from anywhere but your own business often fail to factor in your environment, your processes, and your team’s knowledge (and the gaps within it). Whilst it can be argued that identifying these things is positive, doing so risks negatively impacting your team’s environment – particularly if the measure is introduced with little or no prior warning.
When introducing any new metric with a benchmark, such as the fifteen-tasks-per-month example, use existing data in your business where you can. That isn’t always possible, though – and in that instance, what can you do?
- Consult those to whom the benchmark applies, allowing each person to give their thoughts on it and on what a suitable, measurable figure would be. Most importantly, justify why it is being brought in, particularly if you’ve operated without any metrics for some time. Not only will your team feel involved in the introduction, they will also understand why it is being introduced.
- Introduce the metric on an assessment-only or probationary basis. For the first x number of weeks, the benchmark is in place and reviewed frequently – you may find that your team, or individuals within it, regularly exceed the benchmark, or the inverse. Rather than testing people against your new measure, you’re testing the benchmark in your environment. In most instances the first version of a measure is rarely the best reflection, and the first few months give you time to capture the relevant business data you need and to adjust your environment as needed, so you end up with a metric of real measurable value.
There are instances where you can create metrics without a benchmark in place. Going back to ‘time to resolution’, a metric could be that 100% of issues in a given month were resolved within thirty minutes. On the face of it, the team performed well. An internal benchmark, calculated from the previous twelve months of resolved issues, might be to have issues resolved within fifteen minutes. Your metric shows that 100% of issues were resolved within thirty minutes, but 0% were resolved within your benchmark time. Accompanying the metric with a benchmark gives far more measurable context.
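A minimal sketch of that calculation follows, with hypothetical resolution times in minutes. The fifteen-minute benchmark has to come from somewhere; a simple average of the previous twelve months is assumed here purely for illustration.

```python
# A minimal sketch with hypothetical resolution times, in minutes. A simple
# average of the previous twelve months is assumed as the benchmark.
historical_minutes = [12, 14, 18, 15, 13, 16, 17, 14, 15, 16, 13, 17]
benchmark = sum(historical_minutes) / len(historical_minutes)  # 15.0 minutes

this_month = [22, 25, 28, 19, 27]  # every issue resolved within thirty minutes

within_thirty = sum(m <= 30 for m in this_month) / len(this_month)
within_benchmark = sum(m <= benchmark for m in this_month) / len(this_month)

print(f"Resolved within 30 minutes: {within_thirty:.0%}")                              # 100%
print(f"Resolved within the {benchmark:.0f}-minute benchmark: {within_benchmark:.0%}")  # 0%
```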
Do not be ruled by one month or one metric
It’s critical, when applying any form of measurement or metric, not to let it define your team. Single-minded focus can deliver exceptional results against that one measure, but at the expense of everything else.
Even in some of the largest technology businesses, lines of code (LoC) remains a loose measure of developer output. It’s a simple measure that, from the outside, implies more code means better output.
Far from it.
Have you ever read an email or letter full of jargon and just wished it could be simpler and shorter? The same applies to code: short and simple can be much more effective. What’s achievable in 1,000 lines of code is often achievable in 100 (or fewer).
If technology teams were measured solely on LoC, every website would be written in what’s affectionately known as spaghetti code: unreadable and unmaintainable. Fortunately, LoC isn’t an exclusive measure (and if it is where you’re working: run).
To play devil’s advocate, even if it were the sole metric, a developer who writes only 100 lines in a single month is not necessarily underperforming – particularly if they go on to write 10,000 in the following period. Use a broader timeframe than month-to-month when reviewing any form of metric or benchmark.
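A minimal sketch, with hypothetical monthly figures, of why a rolling quarter can tell a very different story to a single month:

```python
# A minimal sketch with hypothetical monthly figures: a single slow month looks
# very different once viewed across a broader, rolling timeframe.
monthly_lines = {"Jan": 100, "Feb": 6200, "Mar": 3800}

january_alone = monthly_lines["Jan"]
rolling_quarter = sum(monthly_lines.values())

print(f"January in isolation: {january_alone} lines")     # looks like underperformance
print(f"Rolling quarter total: {rolling_quarter} lines")  # 10,100 – a very different picture
```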
Common metrics and benchmarks that work even in small teams and can grow with your business include the following (a short sketch of how some of them might be calculated follows the list):
- Time to issue resolution: the time from an issue being identified and reported to it being resolved.
- Tasks completed: setting a benchmark at role level for the number of tasks each person aims to complete each month.
- Sprint size: for teams in agile, the number of items typically achieved in any given sprint.
- Release frequency: how often you ship updates to your technology – in agile this may be at the end of each sprint, while for other systems it may be consistently each quarter.
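As a minimal sketch of how a few of these could be computed from your own tracking data, here are hypothetical issue and sprint records; the field names are illustrative and not taken from any specific tool.

```python
# A minimal sketch with hypothetical issue and sprint records; the field names
# are illustrative and not taken from any specific tool.
from datetime import datetime

issues = [
    {"reported": datetime(2024, 3, 1, 9, 0), "resolved": datetime(2024, 3, 1, 9, 25)},
    {"reported": datetime(2024, 3, 4, 14, 0), "resolved": datetime(2024, 3, 4, 14, 40)},
]

# Time to issue resolution: average minutes from report to resolution.
resolution_minutes = [
    (i["resolved"] - i["reported"]).total_seconds() / 60 for i in issues
]
avg_resolution = sum(resolution_minutes) / len(resolution_minutes)

# Sprint size and release frequency, assuming one release per completed sprint.
sprints = [{"items_done": 21}, {"items_done": 18}, {"items_done": 24}]
avg_sprint_size = sum(s["items_done"] for s in sprints) / len(sprints)
releases_this_quarter = len(sprints)

print(f"Average time to resolution: {avg_resolution:.0f} minutes")
print(f"Average sprint size: {avg_sprint_size:.0f} items")
print(f"Releases this quarter: {releases_this_quarter}")
```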
But you can also go one further, beyond developer output, and set benchmarks that actively promote growth as well as hands-on performance:
- Mentoring time: the average time your senior team members spend advising and mentoring junior colleagues.
- Training time: similar to the above, but the amount of time, on average, that each member of your team can allocate towards training and research. Setting a benchmark of four hours a month (for example) flips training on its head and actively encourages members to partake, rather than leaving them to find time as and when they can.