I am like many:
One day I’m blindly following my navigation system on the way to Hamburg, and the next I’m annoyed by yet another personalized ad that reaches me unwanted. As soon as I realize how much technology influences me, I try to distance myself from it and trust my own judgment more.
And yet we ultimately return to technology, interacting more and more each day with computerized agents. We negotiate with them, we learn with them, and they understand our needs and emotions. But to what extent are we transferring our familiar social rules to human-machine interaction?
Humans are ahead of the game – at least when it comes to soft skills.
Human interaction is based on trust. Its impact is often underestimated. And it is precisely trust that we tend to lack with our technical counterparts.
Computer agents are therefore increasingly equipped with anthropomorphic features and autonomous behavior, both to improve their problem-solving capabilities and to make interaction with humans feel more natural.
This presents us with new challenges in making trust-based decisions. Since trust only develops over time, it is hard to judge, especially when facing anthropomorphic computers. And do we even want a trust relationship to develop?
According to Jones and George, three characteristics of interpersonal coordination, in particular, set the stage for trust building:
- Our values: They form the basis on which we evaluate others.
- Our attitudes: Object-specific attitudes form the basis for how different people experience trust.
- Our emotions: They are part of the trust experience.
All three are distinctly human characteristics that a computer does not necessarily have. However, studies show that various embodiments of artificial agents elicit natural responses in humans. How can this be?
The idea is often to endow technologies with human-like characteristics in order to build trust: human-like facial expressions and gestures evoke certain emotions, and these in turn foster trust.
And advice from computers is more rational and objective than human advice, after all.
At least, that is what the majority expects. Yet we all have difficulty reconciling our trust in computer results with the actual reliability of those results. This is due to automation bias, through which we attribute greater power and authority to computer-generated decision support than to other advice sources. One should not forget that behind every program there is simply a human, influenced by his or her own values.
Ultimately, technology serves people.
We humans have our own programming – we are social beings, and that’s a good thing. Fortunately, in the long term, technical possibilities will not be sufficient to imitate real encounters between people. Soft skills like trust remain human, because digital options are complements, not substitutes.
When Miriam Mertens and I joined forces to found DeepSkill, that was exactly what mattered to us. The focus should be on people and, with them, on interpersonal relationships. That’s why it’s all about the right mix. The technology will continue to develop and grow.
But I am convinced, and that also applies to the future:
Technology can do many things, but we humans can do more.
G. R. Jones and J. M. George. 1998. The experience and evolution of trust: Implications for cooperation and teamwork. Academy of Management Review.