Yes: We Can Compare Apples and Oranges!

July 15, 2016

The NEXES Action uses flexible Key Performance Indicators (KPIs) that each consist of multiple effect measurements. Emergency services can select and tailor effect measurements to their individual situation. How can we compare the NEXES impact across emergency services? Can we compare apples, oranges and even more fruit? The key is to exploit the flexible KPIs’ principle: measure only once yet re-compute the score often.

This is the fifth article in a series of six about the NEXES Key Performance Indicators. The previous article described the parts of effect measurements that can be tailored by an emergency service.

Comparing apples and oranges, or any other fruits, becomes easier if the comparison is made as ‘equal’ as possible. By ‘equal’ we mean that we use the same methods, the same scores and the same importance of intermediate values when evaluating any type of fruit. However, within NEXES we cannot use a ‘one size fits all’ set of KPIs, which would immediately make the comparison ‘equal’. So we have to do a bit more work to make our comparison ‘equal’ when using the flexible KPIs.

Suppose we have two orchards: one that grows oranges and one that grows apples. The apple orchard originally selected the effect measurement on the apples’ colour and tailored it to its needs (see the example in the third blog). The apple orchard is now changing its business to grow apples only for an apple juice factory. For those apples the orchard no longer selects any of the effect measurements on colour; instead it selects the effect measurement on the absence of brown spots on the apples. The orchard reasons that, since it only produces apples for an apple juice factory, the colour is not important. Yet the brown spots hint at a loss of quality, so that measurement is deemed relevant. The orange orchard selects the effect measurements on the orange colour and the absence of brown spots.

How do we assess the performance of the orchards? First of all: both orchards grow their own fruit and are able to assess their own scores for the KPI. The apple orchard is going through a change and wishes to compare its previous score for the KPI with its current score. The orange orchard and the apple orchard (before and after the change) each have their own way of calculating a score for the KPI. Can we compare those KPI scores? Each orchard uses one or two of the three possible effect measurements.

When we look carefully at the process of assessing our flexible KPIs for an emergency service, we see that most of the effort is spent on actually measuring the effects. A lot of hard work is needed to get a measured value: we’ll need to implement data collection algorithms, collect the data, clean the data, analyse the data, apply the measurement method and compute the measured value. Consider again the orchards: ideally, each and every piece of fruit must be inspected individually according to a specific method.


Fortunately, once the hard work has been done, the actual ‘evaluation’ part is a lot easier. Computing the score for an effect measurement requires some calculation using the score formula, also known as the normalisation method. This can easily be done with the support of a simple spreadsheet. Calculating the scores of KPIs from the scores of effect measurements using importance weights is also straightforward, again with a simple spreadsheet.
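To make this ‘evaluation’ step concrete, here is a minimal sketch in Python rather than a spreadsheet. The linear normalisation, the reference values and the importance weights are illustrative assumptions for the orchard example, not the actual NEXES formulas.

```python
# Minimal sketch of the evaluation step: normalise a measured value into an
# effect-measurement score, then combine the scores into a KPI score using
# importance weights. All reference values and weights are illustrative.

def normalise(measured_value: float, worst: float, best: float) -> float:
    """Map a measured value onto a 0..1 score by linear normalisation
    between a 'worst' and a 'best' reference value."""
    if best == worst:
        raise ValueError("reference values must differ")
    score = (measured_value - worst) / (best - worst)
    return max(0.0, min(1.0, score))  # clamp to the 0..1 range

def kpi_score(scores: dict, weights: dict) -> float:
    """Weighted average of the selected effect-measurement scores."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# The orange orchard: two selected effect measurements, each measured once.
measured   = {"orange_colour": 0.82, "no_brown_spots": 0.95}
references = {"orange_colour": (0.0, 1.0), "no_brown_spots": (0.0, 1.0)}
weights    = {"orange_colour": 2.0, "no_brown_spots": 1.0}  # tailored importance

scores = {name: normalise(value, *references[name]) for name, value in measured.items()}
print(round(kpi_score(scores, weights), 2))  # 0.86
```

The weighted average over only the selected effect measurements mirrors what the spreadsheet calculation would do.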

When looking carefully at the hard work of conducting the measurement method and the lighter work of conducting the evaluation, we notice an interesting difference. The measurement method is specific to an individual emergency service and cannot easily be changed and redone (as it is a lot of hard work). The evaluation part also uses values specific to an emergency service, yet these are easy to change, after which the scores can simply be re-computed.

We now have the means to make our comparison as ‘equal’ as possible, given that we are comparing apples and oranges. We cannot ‘equalise’ the measurement methods across emergency services: that would bring us back onto the “one size fits all” path. The measurement methods of the emergency services are assumed to be different, although they may be similar. Based on the resulting measured values, the evaluation is tailored by each emergency service to its own needs. This tailoring is what we can re-use: we can take the tailoring of any one emergency service and apply it to that same service (e.g. after a change) or to another emergency service (e.g. out of curiosity). So we can compare KPI scores from the perspective of any emergency service by applying its tailoring to many emergency services, including itself after a change. It also becomes possible to re-compute the scores for all emergency services, given new insights into, for example, the relative importance of specific measurement methods, or using similar reference values, or for any other reason.
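Continuing the sketch above (same illustrative assumptions), the principle of ‘measure only once yet re-compute the score often’ then amounts to keeping the measured values fixed and re-running only the evaluation with different tailoring:

```python
# Re-compute the KPI score with a different (hypothetical) tailoring: the
# measured values and effect-measurement scores stay fixed, only the
# importance weights change.
alternative_weights = {"orange_colour": 1.0, "no_brown_spots": 3.0}
print(round(kpi_score(scores, alternative_weights), 2))  # 0.92: same measurements, new tailoring
```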

Unfortunately, there are some complications. An important factor is that each emergency service can select relevant effect measurements, and thus choose to ignore irrelevant ones, both before and after the adoption of internet-enabled communication capabilities. For example, when an emergency service changes its communication capabilities, some effect measurements become relevant or irrelevant after the change. Comparing the situation at an emergency service before and after a change therefore does not necessarily involve the same effect measurements. This makes our comparison less ‘equal’. The good news is that we are aware that the comparison is not ideal. Since forewarned is forearmed, we can carefully consider the impact of the conclusions drawn on the basis of such comparisons.

Consider again our two orchards, each with their own scores for the KPI. Knowing how each orchard measured effects and computed scores makes it possible not to compare the KPI scores blindly. For the apple orchard, the comparison can still be between the KPI scores in the before and after situations, even though different effect measurements were selected following the change in the apples being produced. This comparison provides insight into the effect of the change in apple production, as it concerns the same apple orchard. Comparison between different orchards can be indicative at most: given the different tailoring of the effect measurements, care must be taken to avoid strict evaluations. For example, the apple orchard may have a lower score for the KPI than the orange orchard. We cannot state that the higher score means the orange orchard is ‘better’, seeing that the flexible KPI has been interpreted differently. By knowing more, we can be fairer in the comparison and avoid drawing ‘quick & dirty’ conclusions.
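As an illustration only, re-using the helper from the first sketch with made-up numbers, the apple orchard’s before and after scores rest on different effect measurements, so the two KPI values do not measure the same thing:

```python
# Hypothetical numbers: the 'before' KPI uses only the colour measurement,
# the 'after' KPI uses only the brown-spots measurement.
before = kpi_score({"apple_colour": 0.70}, {"apple_colour": 1.0})
after  = kpi_score({"no_brown_spots": 0.90}, {"no_brown_spots": 1.0})
print(round(before, 2), round(after, 2))  # 0.7 vs 0.9, but based on different measurements
```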

The flexibility of the KPIs with the decoupled effect measurements has solved two puzzles. It is possible to have a fair, respectful and motivating assessment of the impact of NEXES solutions. And it is possible to have a structured means for comparison of the before and after situation of emergency services, introducing the principle of ‘measure only once yet re-compute the score often’. The next and final article in this series discusses whether our work on flexible KPIs has now finished.


This blog is number 5 in a series of six articles on the NEXES Key Performance Indicators and Effect Measurements. If you wish to delve deeper into the NEXES Action and its solution to comparing apples and oranges, we recommend reading deliverable D2.4.

Dr. Niek Wijngaards works for AIMTech Consulting Limited in the United Kingdom and True Information Solutions in the Netherlands as a senior consultant and solution architect. His focus on user-centred innovation and his work on intelligent systems and scenario-based robust decision-making provide a sound basis for the development of the NEXES flexible KPI structure, making it possible to rigorously compare apples, oranges and indeed the entire contents of a fruit basket. Niek can be contacted at n.wijngaards AT aimtech DOT co DOT uk for KPI and fruit-related questions.


Copyright © 2016, NEXES RIA, All Rights Reserved. The NEXES Research and Innovation Action has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 653337. The work on the NEXES Key Performance Indicators is co-authored by the Action partners and has benefited from the constructive comments by the reviewers. See the NEXES LinkedIn group for an overview of NEXES colleagues. All images Copyright © NEXES unless stated otherwise.