Minds and Bodies: Who and What do we Trust in HRI?

Tom Williams
Mines Robotics
May 20, 2021


This post summarizes our research paper “Deconstructed Trustee Theory: Disentangling Trust in Body and Identity in Multi-Robot Distributed Systems” by Tom Williams, Daniel Ayers, Camille Kaufman, Jon Serrano, and Sayanti Roy. This work was published and presented at HRI 2021.

What do you think about when you think of a robot? Probably something like C-3PO from Star Wars… or your Roomba. Either way, you probably think of the robot in question as a single, discrete, individual entity with a single mind and a single body. After all, we often understand robots by applying “scripts” from human-human interaction (cf. Chad and Autumn Edwards’ work), so psychologically this way of thinking about robots is well justified. Unfortunately, it’s not terribly accurate for most robots!

As an example, let’s consider NASA’s Astrobee robots, Honey, Queen, and Bumble (see the visualization of free-floating Astrobees above). While these robots have distinct bodies and (cute!) individual names, they’re all part of the same networked system. It’s probably more accurate to think of them as one mind with three bodies.

What this means is that for these robots, the “identities” of Honey, Queen, and Bumble are performative: a polite fiction enacted for human benefit, so that it’s easier to reason about and talk about these robots.

The fact that the unique identities of these robots are “performed” suggests that robots could be designed to perform identity in different ways.

As an example, in our paper we consider the potential for “performative reembodiment and coembodiment”. In the “Reembodiment” and “Coembodiment” design patterns being explored by Aaron Steinfeld and Jodi Forlizzi’s students at CMU, different (fictional!) “identities” appear to migrate between bodies or co-inhabit the same body. One of our points is that robots can “appear” to reembody or coembody without actually doing so “behind the scenes”. For example, a robotic system with multiple bodies and a single cognitive architecture could “put on a show” of having one (performed) identity “hop” between bodies, even though no such thing is actually happening at the software level (sketched below).
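To make this concrete, here is a minimal, hypothetical sketch of the idea — not the architecture from our paper or from NASA’s Astrobee software: a single shared controller drives every body, and an “identity” is nothing more than a label attached to utterances at presentation time. All of the names here (Body, SharedController, speak_as, perform_identity_hop) are invented for illustration.

```python
# Hypothetical sketch: one shared controller, many bodies, performed identities.

class Body:
    """One physical platform (e.g., a single free-flying robot)."""

    def __init__(self, color: str):
        self.color = color

    def say(self, identity: str, text: str) -> None:
        # A real system would route this to text-to-speech using the voice
        # associated with the performed identity; here we just print it.
        print(f"[{self.color} body, voice of {identity}] {text}")


class SharedController:
    """A single cognitive architecture 'performing' identities across bodies."""

    def __init__(self, bodies: dict):
        self.bodies = bodies   # body color -> Body
        self.voice_of = {}     # identity label -> color of the body voicing it

    def assign(self, identity: str, color: str) -> None:
        self.voice_of[identity] = color

    def speak_as(self, identity: str, text: str) -> None:
        self.bodies[self.voice_of[identity]].say(identity, text)

    def perform_identity_hop(self, identity: str, new_color: str) -> None:
        # Purely performative: only the label-to-body mapping changes;
        # nothing "migrates" in the underlying software.
        self.voice_of[identity] = new_color


controller = SharedController({"yellow": Body("yellow"), "purple": Body("purple")})
controller.assign("Honey", "yellow")
controller.assign("Bumble", "purple")

# Humanlike performance: each identity speaks from "its own" body.
controller.speak_as("Honey", "Bumble found a leak in the next module.")

# Non-humanlike performance: "Bumble" appears to briefly possess the yellow body.
controller.perform_identity_hop("Bumble", "yellow")
controller.speak_as("Bumble", "I found a leak in the next module.")
```

The point of the sketch is that the “hop” is entirely a matter of presentation; nothing changes in the underlying software.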

This performativity matters because whether and how identity is “performed” in multi-robot distributed systems should change the way that humans internally represent and reason about those robots. That is, if robots don’t need to have (or appear to have) tightly associated bodies and identities, then perhaps humans should (especially under non-humanlike performative strategies) construct distinct mental representations for bodies versus identities.

This in turn is especially important for how we think about human-robot trust. If people develop separate mental representations for robots’ bodies versus the named “identities” that seem to be hopping between those bodies, then people should also be establishing different levels of trust in those robot bodies versus those robot identities!

In our paper, we use this idea to motivate a new theory of Human-Robot Trust, which we term Deconstructed Trustee Theory, where we think separately about the trust in a robot body vs the trust in a robot identity. In our paper, we also propose an account for how these differentiated “Loci of Trust” may form, and how different levels of trust may be built up in them.

Specifically, we propose a hybrid account in which (1) mental representations of trustees are created through top-down application of “scripts”, and (2) these representations are refined through observation of trust-relevant actions that are either praiseworthy or blameworthy. This distinction between praiseworthy and blameworthy actions is important because recent work from Bertram Malle and colleagues suggests that blame is more differentiated and intense than praise. This suggests to us that when robots are observed performing blameworthy actions, observers should be more critical about exactly who they’re choosing to blame, leading to greater divergence between trust in body and trust in identity than for praiseworthy actions.
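As a toy illustration of that intuition (and only that; this is not the formal model from our paper), you could imagine both loci of trust being updated after each observed action, with blame treated as more intense and more differentiated than praise, so that blameworthy actions pull the two loci apart. The function name, parameters, and numbers below are all invented for illustration.

```python
# Toy illustration only: an invented update rule, not our paper's formal model.

def update_trust(body_trust, identity_trust, praiseworthy, attribution_to_identity):
    """Return updated (body_trust, identity_trust) after one observed action.

    praiseworthy: True for a praiseworthy action, False for a blameworthy one.
    attribution_to_identity: 0..1, how much the observer pins the action on
        the performed identity rather than on the physical body.
    """
    if praiseworthy:
        # Praise is assumed to be less differentiated: credit spreads fairly
        # evenly, so the two loci of trust stay close together.
        return body_trust + 0.1, identity_trust + 0.1
    # Blame is assumed to be more intense and more differentiated: the observer
    # works out *who* to blame, so the (larger) update is split by attribution
    # and the two loci diverge.
    return (body_trust - 0.3 * (1 - attribution_to_identity),
            identity_trust - 0.3 * attribution_to_identity)


# One blameworthy action, mostly pinned on the performed identity:
print(update_trust(0.8, 0.8, praiseworthy=False, attribution_to_identity=0.9))
# -> roughly (0.77, 0.53): body trust barely moves, identity trust drops sharply.
```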

Put more simply, if you see someone do something bad, you’re going to say “Hey, wait a minute! That’s not right! Maybe I shouldn’t be trusting… whoever did that thing… who exactly did that thing?” Whereas when you see someone do something good, you might be a little less careful about precisely figuring out who or what you’re intending to praise.

So far this is all just speculation. Time for an experiment! We ran an online experiment with 210 people, where we separately measured body- and identity-oriented trust after blameworthy or praiseworthy actions, under humanlike and non-humanlike identity performance strategies. Specifically, participants watched a yellow robot introduced as “Honey” and a purple robot introduced as “Bumble”. After Bumble left the room, one of two things happened: either Honey reported that Bumble had found or caused a leak (performance of clear 1–1 association of body and identity, shown on the right below) or Bumble’s voice came out of Honey’s body (apparently momentarily “possessing” Honey; a clear “breakage” between body and identity, as shown on the left below).

So what did we find?

(1) When a robot “cedes control” of its body to another “identity” (performative reembodiment/coembodiment), this weakens its “locus of trust”: it’s perceived less strongly as the kind of agent it even makes sense to consider trusting or distrusting.

(2) This locus-of-trust-weakening *may* have also happened for robots taking blameworthy actions.

(3) We didn’t see any divergence between trust-in-body and trust-in-identity on the basis of identity performance strategy… but this may have been in part because this divergence was happening anyway, regardless of identity performance strategy!

(4) When the non-humanlike identity performance strategy was used, more trust was built overall in the robot whose actions were being reported.

(5) And finally, robots taking blameworthy actions did indeed see more divergence between body and identity trust!

Overall, these results suggest that we need to radically rethink how we think about human-robot trust… and provide a cool new theory to explore in future work!

If you want to find out more about this work, read the paper or watch the video below!

You may also be interested in two workshop papers we co-authored with Katie Winkle on different aspects of robot identity:

“Design, Performance, and Perception of Robot Identity” by Ryan Blake Jackson, Alexandra Bejarano, Katie Winkle, and Tom Williams (Video)

“On the Flexibility of Robot Social Identity Performance: Benefits, Ethical Risks and Open Research Questions for HRI” by Katie Winkle, Ryan Blake Jackson, Alexandra Bejarano, and Tom Williams (Video)



Tom Williams is an Assistant Professor of Computer Science at the Colorado School of Mines, where he directs the MIRRORLab.