Actively stimulating autonomy of users

January 2, 2015

In some domains, including elderly care, the overall socio-technical solution is deliberately engineered to actively increase the autonomy of participants. In ambient-assisted living and domotica projects, a social and technical environment is engineered to help people live independently. The aim is to let people stay in their own environment without continuous human assistance, which is becoming (too) expensive. Can technology assist, and how?

Two sides of the same coin are important here: the privacy of the participant and social activity. As more technology enters a person’s private environment, safeguards are needed to guarantee privacy. That does not mean enforcing privacy at all times: if a person falls and does not move, privacy could be temporarily lifted so that another person can assess the situation, for example through a remote connection. Social cohesion is a desired side-effect of human assistants: removing them increases the chance that elderly people who live alone become isolated. The technology must therefore strike a balance between privacy and social activity.
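One way to think about this balance is as an explicit escalation policy: full privacy is the default, and it is relaxed step by step only while the situation suggests help may be needed. The sketch below is a minimal, hypothetical example; the sensor fields, thresholds and privacy levels are assumptions chosen for illustration, not part of any particular ambient-assisted-living system.

```python
from dataclasses import dataclass

# Hypothetical sensor readings; the field names are illustrative assumptions.
@dataclass
class Observation:
    fall_detected: bool           # e.g. from an accelerometer or camera-based detector
    seconds_without_movement: int
    resident_responded: bool      # resident answered an automated "are you OK?" prompt

def privacy_level(obs: Observation) -> str:
    """Return how much of the private environment may be shared with a remote caregiver.

    Privacy is the default; it is lifted only gradually, and only while the
    situation suggests the resident may need help.
    """
    if not obs.fall_detected:
        return "private"              # nothing is shared
    if obs.resident_responded:
        return "private"              # the resident confirmed they are fine
    if obs.seconds_without_movement < 120:
        return "notify-only"          # alert a caregiver, but share no audio or video
    return "remote-assessment"        # allow a temporary audio/video connection

# Example: a fall, three minutes without movement, no response to the prompt.
print(privacy_level(Observation(True, 180, False)))   # -> "remote-assessment"
```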

The technology that is to increase human autonomy must not only assist, but also be ‘aware’ of what the person needs: it must become personalised, empathic, moral, and ideally a combination of these. It needs sophisticated models to assess when to coach, engage or protect the person. The more sophisticated the technology becomes, the greater the role that trust plays. Do we, as humans, trust the technology to ‘do the right thing’, whatever that means in a specific context? Would you trust your ‘intelligent’ home? When (not)? The field of machine ethics studies these questions and goes beyond the well-known three laws of robotics formulated by Isaac Asimov in “I, Robot”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
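As a toy illustration of how such trade-offs between coaching, engaging and protecting might be encoded, here is a minimal sketch of value-weighted action selection, loosely in the spirit of machine-ethics models that balance moral values such as autonomy, well-being and safety. The values, weights and scores are hand-picked assumptions for the example, not taken from any published system.

```python
# Relative importance of each moral value (illustrative assumption).
VALUE_WEIGHTS = {"autonomy": 0.4, "well_being": 0.3, "safety": 0.3}

# How well each candidate response to a missed-medication event serves each value,
# scored in [0, 1]; the numbers are chosen by hand for the example.
CANDIDATE_ACTIONS = {
    "do_nothing":      {"autonomy": 1.0, "well_being": 0.2, "safety": 0.1},
    "gentle_reminder": {"autonomy": 0.8, "well_being": 0.7, "safety": 0.6},
    "alert_caregiver": {"autonomy": 0.3, "well_being": 0.9, "safety": 0.9},
}

def choose_action(actions: dict, weights: dict) -> str:
    """Pick the action with the highest weighted sum over the moral values."""
    def score(action_scores: dict) -> float:
        return sum(weights[value] * action_scores[value] for value in weights)
    return max(actions, key=lambda name: score(actions[name]))

# With these weights the system prefers a gentle reminder over silence or escalation.
print(choose_action(CANDIDATE_ACTIONS, VALUE_WEIGHTS))  # -> "gentle_reminder"
```

Shifting the weights shifts the behaviour: putting more weight on safety makes the system escalate sooner, while more weight on autonomy makes it hold back, which is exactly the kind of trade-off a moral model has to make explicit.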

For user-centered innovation it is an interesting thought not only to guarantee the autonomy of participants, but to actively increase their autonomy through their participation in the solution.

Do you wish to read more?

  1. Ambient Assisted Living Joint Program (EU)
  2. Matthijs Pontier: “Emotioneel Intelligente Zorgrobots die met Moreel Besef onze Autonomie Stimuleren” (in Dutch: “Emotionally Intelligent Care Robots that Stimulate our Autonomy with Moral Awareness”) on SlideShare.net, about research at CaMeRa (Vrije Universiteit Amsterdam)
  3. Domotica (also known as smart homes) to support people with disabilities. (Wikipedia)
  4. Machine Ethics on Wikipedia
  5. Asimov, “I, robot” and the three laws of robotics (Wikipedia)
  6. Software can adapt itself to its users (personalisation) (Wikipedia)