Adapt Effect Measurements to Fruit

July 12, 2016

The NEXES Action defines flexible Key Performance Indicators (KPIs) that consist of multiple effect measurements. This solution meets emergency services’ need for fairness and respect for their individual differences. Yet each emergency service also needs its own say in how the effect measurements are conducted. How can their autonomy be guaranteed?

This is the fourth article in a series of six about the NEXES Key Performance Indicators. The previous article described the flexible KPIs, where each KPI depends on multiple effect measurements. These flexible KPIs are deemed fair, respectful and motivating for emergency services.

The approach chosen by NEXES is to guarantee the autonomy of emergency services. Here, ‘autonomy’ refers to an emergency service’s ability to govern itself, make its own plans, and decide when and how to adopt specific NEXES solutions. Any approach that limits the autonomy of emergency services is destined to fail.

The autonomy of emergency services is guaranteed in two ways. First, each emergency service selects the effect measurements that are relevant to its situation. Second, an emergency service may tailor the relevant effect measurements to its specific situation. Furthermore, NEXES requires that all of these decisions and adaptations are made visible. This visibility makes it possible to compare emergency services’ adoption of NEXES solutions: the original reason for introducing KPIs.


Within NEXES, an effect measurement has a structured description. An emergency service is invited to ‘fill in the blanks’, thereby tailoring the effect measurement to its situation. For example, consider effect measurement #31, which measures Incident Location Accuracy: the discrepancy between the incident location determined by the emergency service’s operators and the actual location of the incident. It is expected that GPS-based location information from callers pinpoints the incident location more accurately than triangulation on cellular base stations, especially in rural areas. The description of the effect measurement includes:

  • An explanation, including whether the effect measurement can be measured while the incident is handled or at an after-action review (in this case, at an after-action review).
  • An explanation of the measured value: the unit in which it is measured (in this case, meters) and whether the expected effect is a higher or lower value (in this case, a lower value). A sketch of such a structured description is given below.
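To make the ‘fill in the blanks’ idea concrete, here is a minimal sketch of how such a structured description could be captured in code. The field names and the Python representation are our own illustrative assumptions, not the actual NEXES format (see deliverable D2.4 for that).

```python
from dataclasses import dataclass

@dataclass
class EffectMeasurementDescription:
    """Hypothetical sketch of a structured effect measurement description."""
    number: int            # e.g. 31
    name: str              # e.g. "Incident Location Accuracy"
    explanation: str       # what is measured and why
    measured_at: str       # "during incident handling" or "after-action review"
    unit: str              # e.g. "meters"
    lower_is_better: bool  # expected direction of the effect

# Effect measurement #31 from the example above.
incident_location_accuracy = EffectMeasurementDescription(
    number=31,
    name="Incident Location Accuracy",
    explanation="Discrepancy between the operator-determined and actual incident location",
    measured_at="after-action review",
    unit="meters",
    lower_is_better=True,
)
```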

An emergency service can tailor the following parts of the effect measurement:

  • The lowest and highest possible values, which can often be determined; in this case 0 meters (an exactly pinpointed location) as the lowest value and, for example, 30 km as the highest value.
  • The reference value, which is used to compute a score. For example, 20 meters or less is considered a positive, or good, score for this effect measurement; measured values (much) larger than 20 meters are considered a bad score.
  • The period of time for which the average incident location accuracy is computed, for example 15 minutes.
  • The measurement method, which details how to determine the accuracy for a single incident location and how to compute the average over multiple incidents in the time period.
  • The scoring formula (also called the normalization method), which uses the reference value and the (averaged) measured value. It must compute a score between 0 and 1, inclusive, where a higher score is always better. Thus, the smallest difference in location must yield the highest score; a sketch of such a formula follows this list.
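As an illustration of such a scoring formula, the sketch below shows one possible normalization for a ‘lower is better’ measurement such as incident location accuracy. The exact formula is tailored by each emergency service; this particular shape (a perfect score at or below the reference value, decaying towards zero above it) is our assumption, not the NEXES-prescribed method.

```python
def normalize_lower_is_better(measured: float, reference: float,
                              lowest: float = 0.0, highest: float = 30_000.0) -> float:
    """Hypothetical scoring formula for a 'lower is better' effect measurement.

    Returns a score between 0 and 1 (inclusive), where the smallest measured
    value yields the highest score. The defaults reflect the incident location
    accuracy example: 0 m to 30 km, expressed in meters.
    """
    # Clamp the (averaged) measured value to the agreed lowest/highest bounds.
    measured = max(lowest, min(measured, highest))
    # At or below the reference value (e.g. 20 m) the score is perfect;
    # above it, the score decays towards 0 as the discrepancy grows.
    if measured <= reference:
        return 1.0
    return reference / measured

# Example: an average discrepancy of 80 m against a 20 m reference value.
print(normalize_lower_is_better(80.0, reference=20.0))  # 0.25
```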

Furthermore, emergency services can specify how important individual effect measurements are when calculating their KPIs’ scores. The mechanism provided for this is that each effect measurement’s score is given an associated importance, or weight, before it is aggregated into the KPI score.
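The aggregation itself can then be as simple as an importance-weighted average, as in the sketch below. The weighted average is our illustrative assumption; the text above only states that the importance is applied before the effect measurement scores are combined into the KPI score.

```python
def kpi_score(scores: dict[str, float], importance: dict[str, float]) -> float:
    """Illustrative aggregation: importance-weighted average of effect
    measurement scores, yielding a KPI score between 0 and 1."""
    total_weight = sum(importance[name] for name in scores)
    return sum(scores[name] * importance[name] for name in scores) / total_weight

# Example: two effect measurements contributing to one KPI, with the
# location accuracy deemed twice as important as the call setup time.
scores = {"incident_location_accuracy": 0.25, "call_setup_time": 0.9}
importance = {"incident_location_accuracy": 2.0, "call_setup_time": 1.0}
print(round(kpi_score(scores, importance), 2))  # 0.47
```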

So far, we’ve described flexible KPIs and a structured description of effect measurements. We’ve identified the changeable parts of effect measurements and noted that each and every change must be made visible. The next article unveils how this enables the comparison of apples and oranges.


This blog is number 4 in a series of six articles on the NEXES Key Performance Indicators and Effect Measurements. If you wish to delve deeper into the NEXES Action and its solution to comparing apples and oranges, we recommend reading deliverable D2.4.

Dr. Niek Wijngaards works for AIMTech Consulting Limited in the United Kingdom and True Information Solutions in the Netherlands as senior consultant and solution architect. His focus on user-centered innovation and his work on intelligent systems and scenario-based robust decision-making provide a sound basis for the development of the NEXES flexible KPI structure, making it possible to rigorously compare apples, oranges and indeed the entire contents of a fruit basket. Niek can be contacted at n.wijngaards AT aimtech DOT co DOT uk for KPI and fruit-related questions.


Copyright © 2016, NEXES RIA, All Rights Reserved. The NEXES Research and Innovation Action has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 653337. The work on the NEXES Key Performance Indicators is co-authored by the Action partners and has benefited from the constructive comments by the reviewers. See the NEXES LinkedIn group for an overview of NEXES colleagues. All images Copyright © NEXES unless stated otherwise.