Research Projects
Individualized Adaptations to Calibrate Multi-Human Multi-Agent Team Trust
Infusing trustworthiness in robots
Sponsor: DEVCOM Army Research Lab (ARL) via STRONG (Strengthening Teamwork for Robust Operations in Novel Groups) Collaborative Research Alliance (CRA)
Senior investigators: Paul Robinette, Kshitij Jerath, Reza Ahmadzadeh
Junior investigator: Russ Perkins
An individual human or autonomous agent will trust other teammates to perform tasks based on prior experience with those teammates, situational factors, their own propensity to trust, and the characteristics of the agents themselves. In many scenarios, teammates have little prior experience with one another, so trust decisions must rest on the small subset of each other's abilities observed so far. Current agents lack the capabilities needed to help a human calibrate over- or under-trust in them.
In our work, we designed strategies that allow agents to convince their teammates that they should or should not be trusted, so that they become effective teammates, including:
(a) Agent adaptation: the agent changes its behavior to align with human trust requirements.
(b) Human preparation: the agent helps humans learn its actual capabilities so they can better calibrate their trust in it.
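To make the calibration idea concrete, here is a minimal illustrative sketch, not the project's actual method. It assumes a simple Beta-Bernoulli model in which a teammate's trust estimate is built from the few interactions observed so far, and a hypothetical decision rule picks between the two strategies above when the estimate diverges from the agent's true capability. All names (`TrustEstimate`, `calibration_gap`, `choose_strategy`) and the 0.15 threshold are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class TrustEstimate:
    """Beta-distribution estimate of an agent's reliability, built from
    the small subset of interactions a teammate has observed so far."""
    successes: int = 1  # Beta prior alpha (uniform prior: Beta(1, 1))
    failures: int = 1   # Beta prior beta

    def observe(self, succeeded: bool) -> None:
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def mean(self) -> float:
        # Posterior mean of the Beta distribution
        return self.successes / (self.successes + self.failures)


def calibration_gap(estimate: TrustEstimate, true_capability: float) -> float:
    """Positive gap -> the human over-trusts; negative -> under-trusts."""
    return estimate.mean - true_capability


def choose_strategy(gap: float, threshold: float = 0.15) -> str:
    """Hypothetical rule: small gaps need no intervention; larger gaps
    trigger one of the two strategies described in the project."""
    if abs(gap) <= threshold:
        return "calibrated"
    # Over-trust: prepare the human by demonstrating actual limits.
    # Under-trust: adapt agent behavior toward trust requirements.
    return "human_preparation" if gap > 0 else "agent_adaptation"


# Example: a human has seen 8 successes and 2 failures, but the agent's
# true success rate on the upcoming task type is only 0.5.
est = TrustEstimate()
for ok in [True] * 8 + [False] * 2:
    est.observe(ok)
gap = calibration_gap(est, true_capability=0.5)
print(round(est.mean, 2), choose_strategy(gap))  # over-trust detected
```

In this toy example the human's estimate (0.75) exceeds the agent's true capability (0.5), so the rule selects human preparation; had the estimate been too low, agent adaptation would be chosen instead.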