Proceedings of the Human Factors and Ergonomics Society 58th Annual Meeting - 2014

Leveraging Features of Human—Technology Teams to Support Mental Models in Future Soldier—Robot Teams

Elizabeth Phillips, Scott Ososky, and Florian Jentsch
University of Central Florida

The future vision of military robotics is one in which robots will serve as integrated members of Soldier—robot teams. Robots will possess capabilities that will transition their role from functional tools to working teammates. Because robots and Soldiers will be deployed in environments characterized by uncertainty, complexity, and violence, it is imperative that Soldiers have accurate mental models of what their robotic teammates can do, cannot do, and will likely do. In this paper, we present the conclusions of a review of metaphors for facilitating accurate mental models of robotic teammates. Emphasis was placed on investigating existing human—technology teams (i.e., humans teaming with automated systems, including autopilot in cockpits, driver assistance systems, and personal assistant applications, among others) for features that can support accurate mental models for Soldiers in future Soldier—robot teams.

INTRODUCTION

The future vision of military robotics is one in which robots will serve as integrated members of dismounted Soldier—robot teams. Meeting this vision will require a transition in robots from functional tools to working teammates (Army Research Laboratory, 2011). Because robots will be deployed in mission-critical environments, it will be necessary for Soldiers to have an accurate understanding of what their robotic teammates can do, cannot do, and will likely do. However, prior research has shown that people, in general, hold ill-formed and inaccurate mental models of robots and robotic technologies, and are easily influenced by robot appearance, dialogue, and origin, as well as other external features (Phillips, Ososky, Grove, & Jentsch, 2011), including motion behaviors (Ososky, 2013).


The human metaphor applied to robots

Prior research has also shown that acceptance of robots as partners is enhanced when robots possess human-like characteristics. For example, Torrey, Fussell, and Kiesler (2013) recently found that robots that deliver instructions for completing a task using verbal hedges (e.g., like, sort of, I suppose, uhm) are considered more likeable and considerate than robots that give instructions directly. Moreover, some researchers assert that the humanization of robots (including engendering emotions and personalities in robots) is key to providing humans with a basis for understanding interactions with robots, because building human-like responses into robots allows people to draw upon similar human—human interactions with which they are already familiar. These researchers argue that drawing on familiar interactions is essential for building trust in robots and, ultimately, for better human—robot interaction (Bruemmer et al., 2004). A toy illustration of such hedged instruction phrasing appears below.
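To make the hedging manipulation concrete, the following is a minimal sketch of how a robot's dialogue layer might soften direct instructions with verbal hedges of the kind studied by Torrey, Fussell, and Kiesler (2013). The hedge list, function name, and phrasing rule are our own illustrative assumptions, not details of their experimental system.

```python
import random

# Toy illustration only: the hedges and phrasing rule are assumptions,
# not the materials used by Torrey, Fussell, and Kiesler (2013).
HEDGES = ["maybe", "perhaps", "you could", "it might help to"]

def hedge_instruction(instruction: str) -> str:
    """Soften a direct instruction by prefixing a verbal hedge."""
    hedge = random.choice(HEDGES)
    # "Attach the red wire..." -> "Perhaps attach the red wire..."
    return hedge.capitalize() + " " + instruction[0].lower() + instruction[1:]

print(hedge_instruction("Attach the red wire to the positive terminal."))
```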

Other researchers, however, maintain that robotic "teammate-likeness" will never be an achievable goal. Applying a metaphor of human—human team interaction to human—robot interaction is inappropriate, they argue, because the qualities that make human teams successful are rooted in skills that come naturally to humans but are extremely challenging for robots (Groom & Nass, 2007). Developing accurate mental models of robots is also difficult when robots have human-like qualities (Phillips et al., 2011). Research has shown that humans make assumptions about robot intelligence based on human-like features. A study by Sims et al. (2010) found that platforms with human-like movement generators (i.e., arms and legs) were rated as the least aggressive and the most intelligent, compared with platforms with less human-like movement generators (i.e., wheels and treads). A study by Kiesler and Goetz (2002) found that participants rated a robot as more extroverted based on whether the robot's dialog was playful or serious. These results are important because they highlight the malleability of human mental models of robots. This malleability can lead to an unclear understanding of a robot's abilities (e.g., intelligence, competence, reliability), overreliance on the system, inappropriately calibrated trust, and/or ignoring the system altogether, especially when the abilities of robots do not meet human expectations (Ososky, Schuster, Phillips, & Jentsch, 2013; Phillips et al., 2011).

Accurate mental models for Soldier—robot teaming

We maintain that in military settings, where the accuracy of information is crucial to mission success, having an accurate and somewhat detailed understanding of a robotic teammate is imperative (Phillips et al., 2011). The use of human and human—human team metaphors to describe robotic intelligence, abilities, and human—robot team interaction may lead to overly presumptuous estimates of system capabilities, reliability, and functional intelligence. Many examples of Soldiers attributing animacy to, and feeling empathy for, their robotic tools have already been documented (Armstrong, 2013).

Therefore, it is important to consider the metaphors used to convey knowledge of how robots will interact with human teammates and the roles robots will fulfill in future human—robot teams. These metaphors will have important implications for the types of mental models that are formed and calibrated about robotic teammates. A metaphor of a human—animal or human—machine system may help facilitate an understanding of the task and the team, including the understanding that robotic partners are not perfect replicas of humans and that reliability problems will likely be inevitable.

Human—non-human teams

While many humans do not have significant experience working in teams with robots, there are several domains in which humans do have experience interacting with non-humans to accomplish meaningful work. Examples include teaming with both biological and non-biological entities (e.g., human—animal teams, human—automation teams). Relevant non-biological examples of human—non-human teams include human use of autopilot in cockpits, in-vehicle driver assistance systems, and personal assistant applications such as Apple's Siri or Google Now.

What is important to note about these existing human—non-human teams is that exact replication of human capabilities is not required to accomplish work. For example, human—animal teams are capable of completing a wide variety of tasks (e.g., herding, law enforcement, mine clearance) with limited communication capabilities and limited vocabulary sets (Fincannon, Leotaud, Ososky, & Jentsch, 2011). Non-human partners serve to extend and/or enhance human capabilities as well as free up human cognitive resources. Therefore, existing human—non-human teams can serve as useful metaphors for characterizing features of future robotic teammates, metaphors that can guide the design of future Soldier—robot team interaction and support the development of accurate mental models in Soldier—robot teams. For military teams, accurate information is critical to mission success as well as to the safety of the team and civilians. As such, the quality of the mental model that a Soldier holds regarding a robotic teammate is as critical to favorable mission outcomes as the features and abilities of the robotic teammate itself.

Review and purpose

The purpose of this research project was to explore an alternate metaphor of human—robot interaction (as opposed to the metaphor of human—human interaction) that may provide useful guidance for building into robots the features and abilities necessary to support accurate mental models for Soldiers in future Soldier—robot teams. Specifically, we were interested in reviewing human—non-human teams, with a particular emphasis on human teaming with technological entities (i.e., automated systems, including automated flight decks, driver support systems, and personal assistant applications, among others). We began by collecting literature regarding task-oriented human—technology interaction. Collectively, the team identified over 25 scholarly articles, patents, and conference proceedings that described humans interacting with various technologies in team tasks.

The purpose of our review was to identify features and characteristics of human—technology teams (e.g., supporting capabilities, task allocation, levels of automation) that may be applicable to the development of human—robot teams. Drawing on knowledge gained from evaluating these articles, we identified several overlapping features across many of the teams included in our review. In this paper, we present the conclusions of this review of existing human—technology teams, with a particular focus on features of human—automation interaction that are relevant to future human—robot teaming. Recommendations for supporting accurate mental models and teaming between Soldiers and robots are also provided.

FEATURES OF HUMAN—TECHNOLOGY TEAMS

Feature 1: Teammates need not be exact replicas of humans

Our review revealed several important themes regarding human interaction with technology in teammate-like working relationships within task-oriented environments. The first theme is that it is not necessary for technological teammates to possess capabilities that model those of humans. In fact, the majority of the systems evaluated in our review possess only a very small subset of human capabilities. These systems, whether machines, interfaces, software, or hardware, perform a limited set of functions in the human—non-human team, but perform them very well. For example, the Dynamic Active Display (DAD) driver assistance system is a large-area windshield display and monitoring system that acts as an advanced warning device for drivers (Doshi, Cheng, & Trivedi, 2009). The system alerts drivers to potential road hazards and hazardous driver states. Similar to some human—animal teams (e.g., assistive service animals), the DAD augments the skills of the human by providing additional sensing capabilities, monitoring human states of inattention, and drawing human attention to safety-critical cues.

The DAD system acts as a mechanical co-pilot, not a mechanical auto-pilot, for the driver. While the capability to instantiate an auto-pilot for the driving task is being developed (e.g., the Google driverless car; Fountain, 2012), this does not necessarily mean that removing humans completely from the driving task will make driving safer, faster, or more reliable. In fact, lessons from aviation safety and the use of automation in the flight deck have reinforced this point (Wiener & Curry, 1980; Parasuraman, Molloy, & Singh, 1993). By keeping humans in the loop, the DAD system performs a teammate-like role in the driving task. Functioning as a co-pilot (as opposed to an auto-pilot) is important because it helps to address some of the problems associated with automation-induced complacency while simultaneously filling gaps in human information sensing and processing. By deriving knowledge of the human operator's state, the DAD develops its own internal representation of teammate knowledge to predict the human's mental model of the situation and to make appropriate adjustments or interventions as necessary. By virtue of possessing only a subset of human capabilities, the DAD system facilitates a much more teammate-like interaction between human and machine, in which both parties work together to arrive safely at the intended destination. A minimal sketch of this co-pilot pattern follows.
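The sketch below illustrates the co-pilot (not auto-pilot) pattern: the system senses, monitors, and warns, but control actions remain with the human. The class names, thresholds, and alert wording are illustrative assumptions, not details of the DAD implementation described by Doshi, Cheng, and Trivedi (2009).

```python
from dataclasses import dataclass

# Hypothetical co-pilot sketch: warn the human, never take control.
@dataclass
class DriverState:
    eyes_on_road: bool
    seconds_since_last_glance: float

@dataclass
class RoadHazard:
    description: str
    seconds_to_contact: float

def copilot_alerts(driver: DriverState, hazards: list) -> list:
    """Return warnings only; never issue steering or braking commands."""
    alerts = []
    for hazard in hazards:
        if hazard.seconds_to_contact < 3.0:
            alerts.append(f"WARNING: {hazard.description} ahead")
    if not driver.eyes_on_road and driver.seconds_since_last_glance > 2.0:
        alerts.append("ATTENTION: eyes off the road")
    return alerts  # the human decides how to respond

print(copilot_alerts(
    DriverState(eyes_on_road=False, seconds_since_last_glance=2.5),
    [RoadHazard("pedestrian in crosswalk", 2.1)]))
```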

Feature 2: Smart task allocation

Awareness of the tasks that each entity in a human—machine system performs well is another feature that supports teammate-like interactions with non-human teammates. More specifically, meaningful work between humans and robotic entities or computers is currently being enabled via smart function and task allocation. The notion of function allocation in human—machine systems is not a novel concept; in fact, human factors research on the subject dates back to the 1950s (i.e., the Fitts list; Fitts, 1951, as cited in de Winter & Dodou, 2011). While the original Fitts list is not a panacea of design guidance for allocating tasks in human—machine systems, it does represent a starting point for smart function allocation (de Winter & Dodou, 2011). Our review of the materials that were collected revealed that much of the original theory behind the Fitts list has held true with respect to human—automation and human—non-human teams. Namely, when technology can perform the repetitive, precise, and stable elements of a task, humans are freed to perform the dynamic elements that may require flexible procedures.

Our review revealed a strong example of function allocation in human—technology teams working in pharmacy settings, including pharmacies at Walter Reed Medical Center, the National Naval Medical Center, and various U.S. Air Force medical centers. These medical centers have employed technologies that function as intelligent inventory management systems. Automated entities handle precision tasks (e.g., retrieving and stocking medications, applying labels to bottles, and counting pills), which greatly reduces the likelihood of human error and increases the speed at which pharmacists can fill prescriptions. Similar automated systems are seen in manufacturing settings as well. The automated manufacturing control system described by Monette, Corriveau, and Dubois (U.S. Patent No. 7,286,888, 2007) is a large-scale system for tracking the movement of individual components in a manufacturing setting.

These automated pharmacy assistants and manufacturing control units help to support the job/task, team interaction, and team mental models that are shared among the individuals in the human—technology team. For example, by automating the tasks that are repetitive, dull, and time consuming, the intelligent pharmacy management systems help to support specific knowledge of the roles and responsibilities of each member of the team; a minimal allocation policy in this spirit is sketched below.
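The sketch below illustrates this Fitts-style division of labor: tasks whose profiles are repetitive, precision-critical, and stable are assigned to the automated teammate, while dynamic tasks that demand flexible procedures remain with the human. The attribute names and example tasks are illustrative assumptions, not data from the pharmacy systems we reviewed.

```python
# Minimal sketch of Fitts-style function allocation; attributes and example
# tasks are illustrative assumptions, not drawn from the reviewed systems.

def allocate(task: dict) -> str:
    """Assign a task to 'automation' or 'human' from a simple task profile."""
    machine_suited = (task["repetitive"]
                      and task["precision_critical"]
                      and task["environment_stable"])
    return "automation" if machine_suited else "human"

pharmacy_tasks = [
    {"name": "count pills", "repetitive": True,
     "precision_critical": True, "environment_stable": True},
    {"name": "counsel a patient", "repetitive": False,
     "precision_critical": False, "environment_stable": False},
]

for task in pharmacy_tasks:
    print(f"{task['name']} -> {allocate(task)}")
```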

Similarly, by utilizing RFID technology to track the manufacturing process, the automated manufacturing control system facilitates efficient information flow within the human—automation team. This tracking provides human team members with access to knowledge of the current, past, and future state of each part moving through the manufacturing system. This knowledge is vitally important to the manufacturing process, as parts need to be routed to different machining stations in a specific sequence in order to be tooled correctly. As such, the handling of the equipment by human operators is highly role-interdependent. The automated manufacturing system facilitates role interdependence in the team interaction by storing and providing structured knowledge of the information flow. Including the technology in the manufacturing team helps to ensure that all members are in agreement as to the sequence of actions that will be required to assemble a finished product; a sketch of this kind of routing bookkeeping follows.
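The sketch below illustrates the routing bookkeeping such a system might perform: it records each station a part has visited (past state), checks moves against the required sequence (current state), and reports where the part should go next (future state). The station names and class design are illustrative assumptions, not the patented system's architecture.

```python
from typing import Optional

# Hypothetical routing tracker; station names and design are illustrative.
REQUIRED_SEQUENCE = ["cutting", "drilling", "finishing", "inspection"]

class PartTracker:
    """Tracks one part's past, current, and future state in the routing."""

    def __init__(self, part_id: str):
        self.part_id = part_id
        self.history = []  # past state: stations already visited

    def move_to(self, station: str) -> bool:
        """Record a move, flagging any out-of-sequence routing."""
        expected = self.next_station()
        if station != expected:
            print(f"{self.part_id}: expected {expected!r}, got {station!r}")
            return False
        self.history.append(station)
        return True

    def next_station(self) -> Optional[str]:
        """Future state: the station this part should visit next."""
        if len(self.history) < len(REQUIRED_SEQUENCE):
            return REQUIRED_SEQUENCE[len(self.history)]
        return None  # routing complete

part = PartTracker("A-1042")
part.move_to("cutting")       # in sequence
part.move_to("finishing")     # flagged: drilling was skipped
print(part.next_station())    # -> 'drilling'
```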

In addition, allocating tasks to the members of the team that are best suited to perform them is a means by which team knowledge (i.e., knowledge of teammates' skills, attitudes, preferences, and tendencies) can be facilitated in human—non-human teams. This is important because task allocation helps to supply humans with knowledge of the distributed expertise of each entity within the team. It also allows members to predict how to allocate resources according to team member expertise (Cannon-Bowers & Salas, 2001).

Feature 3: Support perception, working memory, and aided decision making

The majority of the teams that we reviewed included a technological team member that automated, or served to augment, human sensory perception, working memory, and/or decision making. Of the major tasks performed by the technological team members, all supported human sensory capabilities. This insight has important implications for determining the roles that robotic teammates can fulfill in future human—robot teams. That is, we can draw insight for designing robotic teammates that perform the functions humans will need most: tasks that require superior sensory perception, enhanced working memory, and/or aided decision making.

The Freeform Command and Control (F2C2) system is a good example of a system that serves a teammate-like role by supporting sensory perception, working memory, and decision making. F2C2 aids in the command and control of multiple unmanned autonomous assets (UAS). The system is a prototype for planning and executing multiple UAS missions (Scerri, Van Brackle, Rouff, Dick, & Vallandingham, 2012) and allows users to sketch and update mission plans through a drawing interface as new information is received. New information, in turn, is provided by incoming sensor data that is managed by the system. By aggregating and managing the sensor data, the system enhances sensory perception and reduces the strain on human working memory. Because planning helps team members develop and maintain a shared set of goals, ideas for courses of action, and expectations for task performance (Stout, Cannon-Bowers, Salas, & Milanovich, 1999), F2C2 helps to foster task mental models. By facilitating collaborative and iterative mission planning, the human—technology team works together to outline a set of likely scenarios and contingencies, allowing the team to come to a shared agreement about the best course of action. Similarly, the system uses user-generated plans to suggest problems or alternatives when necessary. Generating decision alternatives helps to provide the team with predictions about the future state of autonomous assets, which is beneficial for reducing the strain on human working memory. Further, the system facilitates shared mental models for a task that can be unpredictable, requires iterative planning, and demands real-time changes. By giving teams the ability to adapt quickly and coordinate actions in the presence of changing task demands, the F2C2 system fosters a vital characteristic of high-performing teams. A sketch of this decision-support pattern appears below.
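The sketch below captures this pattern in miniature: sensor reports are checked against a user-drawn plan, conflicts are flagged, and alternatives are suggested, while the choice among them is left to the human operator. The plan representation and threat model are illustrative assumptions; they are not taken from the F2C2 prototype.

```python
# Illustrative sketch (not the F2C2 implementation): sensor reports are
# checked against a user-drawn plan; conflicts are flagged and alternatives
# suggested, but the choice among them is left to the human operator.

def check_plan(waypoints, threat_reports, radius=1.0):
    """Flag waypoints that fall near reported threats; suggest alternatives."""
    suggestions = []
    for i, (wx, wy) in enumerate(waypoints):
        for (tx, ty) in threat_reports:
            if (wx - tx) ** 2 + (wy - ty) ** 2 <= radius ** 2:
                suggestions.append(
                    f"Waypoint {i} conflicts with a threat report; "
                    "consider rerouting or re-tasking another asset.")
    return suggestions  # presented to the operator, never acted on autonomously

plan = [(0.0, 0.0), (2.0, 2.0), (4.0, 1.0)]
threats = [(2.2, 1.9)]
print(check_plan(plan, threats))
```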

RECOMMENDATIONS

There are several examples in which humans participate in team-like interactions with technological entities to perform work. These interactions can lead to the development of many types of shared mental models among the human members of the team. Based on our review, we have identified several features of technological teammates that support different types of shared mental models in humans. From this review, we offer the following recommendations:

Recommendation 1: Focus on specialization, not necessarily generalization

The robotics research and engineering community will need to decide upon the specific subset of capabilities that should be designed into a future robotic teammate, that is, the subset of capabilities that will be important for the team in the context of the task domain. Technological partners do not need to possess the same skills as their human counterparts in order to be useful members of the team. By focusing on the specific skills that are needed for a particular task, we can target efforts on developing those capabilities well and maximize the usefulness of a future robotic partner in the team. For example, the capability to anticipate human decision making is a skill that will be important for supporting patrol tasks. For these tasks, it may be more useful for the robotic teammate to learn the decision-making preferences of its specific partner or small group of partners, rather than of all possible partners with which the robot might interact. That is, it may be more useful to human teammates that their robotic counterpart knows their decision-making tendencies, not the decision-making tendencies of humans in general.

Recommendation 2: Decide what role(s) robotic teammates will fulfill in future human—robot teams and allocate tasks accordingly

Being specific about what roles robotic team members will fulfill in near-future human—robot teams will be important for fostering teamwork. While conducting our review, we noted several human—automation teams that utilized this type of design thinking. By deciding, now, what specific aspects of the task the robot will accomplish, we can begin to build robotic entities that are suited to performing these tasks. Just as managers of human—human teams know to allocate tasks according to individual employee strengths, deciding what roles and tasks robots will perform can help humans to better understand the strengths of their robotic counterparts and use them efficiently in the field. Clear task allocation can help to build appropriate mental models of robotic teammates, as it will foster knowledge of the distributed expertise within the team, as well as enhanced understanding and prediction of robot behaviors.

Recommendation 3: Reduce strain on human cognitive capacity by supporting sensory perception, working memory, and decision making

Of the human—automation teams we reviewed, many assisted their human partners by supporting or enhancing human sensory processing, working memory, and/or decision making. Moreover, of the major tasks performed by the technological teammates, all had the capability to augment sensory processing. Automated teammates helped to foster teamwork by filtering information that was not useful to the task at hand and redirecting human cognitive resources toward completing mission and task goals. Further, automated team members facilitated teamwork by transforming incoming sensory data into a set of contingencies that supported human decision making. Humans may therefore be better served by teammates that help to enhance sensory information and present decision alternatives, while leaving decision selection to the human. This recommendation is also supported by prior research showing that the automation of information processing can have differential outcomes on human—human teamwork (Wright & Kaber, 2005). Specifically, high levels of automation that included the selection of decision options led to increased communications between team members that were unrelated to performing the task, and to higher workload within the team. The researchers concluded that team coordination would be better supported under low to moderate levels of automation related to information acquisition and analysis. A sketch of this division of labor follows.
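As a compact illustration of this recommendation, the sketch below stops automation at information acquisition and analysis: the system scores and ranks courses of action but leaves decision selection to the human. The scoring scheme and option names are illustrative assumptions.

```python
# Sketch of low-to-moderate automation in the spirit of Recommendation 3:
# the system acquires and analyzes information and ranks the alternatives,
# but decision selection stays with the human. Scores and option names are
# illustrative assumptions.

def rank_options(options: dict) -> list:
    """Information analysis: score and order alternatives for the human."""
    return sorted(options.items(), key=lambda kv: kv[1], reverse=True)

courses_of_action = {"route A": 0.81, "route B": 0.64, "hold position": 0.42}
for name, score in rank_options(courses_of_action):
    print(f"{name}: {score:.2f}")
# Deliberately stops short of choosing: the ranked list is presented to the
# human operator, who makes the final selection (cf. Wright & Kaber, 2005).
```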

CONCLUSIONS

It is important to note that humans, in general, may not have much experience working in teams with robots. However, because technology continues to be assimilated into common workplaces, humans will continue to be exposed to teaming with other forms of automated entities. Further, many of these technologies are not necessarily as "advanced" as future military robot teammates are intended to be. It is therefore worth acknowledging that robots differ from automation in a number of distinct ways. For instance, robots have their own internal power sources that drive actuation and locomotion. Robots (especially those intended for the military) will also be expected to perform in dynamic environments and across different kinds of tasks, whereas many of the automated entities reviewed for this project perform relatively stable tasks in constrained environments. Because future robots will be able to complete tasks without direct human oversight, overly presumptuous expectations of their capabilities may be unavoidable without appropriate training interventions. Nevertheless, human—automation teams likely represent an intermediate step in the progress toward non-biological autonomous teammates. By drawing on these metaphors, we can leverage the means by which mental models and teamwork are facilitated, even if the technology that supports this teaming is primitive by comparison.

ACKNOWLEDGMENTS

The research reported in this document was performed in connection with Contract Number W911NF-10-2-0016 with the U.S. Army Research Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as presenting the official policies or positions, either expressed or implied, of the U.S. Army Research Laboratory or the U.S. Government unless so designated by other authorized documents. Citation of manufacturers' or trade names does not constitute an official endorsement or approval of the use thereof. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

REFERENCES

Armstrong, D. (2013, September 13). Emotional attachment to robots could affect outcome on battlefield [Press release]. University of Washington.
Army Research Laboratory. (2011). Robotics Collaborative Technology Alliance (RCTA) FY 2011 annual program plan. Retrieved from http://www.arl.army.mil/www/pages/392/rcta.fy11.ann.prog.plan.pdf
Bruemmer, D., Few, D., Goodrich, M., Norman, D., Sarkar, N., Scholtz, J., & Yanco, H. (2004). How to trust robots farther than we can throw them. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (pp. 1576-1577). ACM.
Cannon-Bowers, J., & Salas, E. (2001). Reflections on shared cognition. Journal of Organizational Behavior, 22, 195-202.
de Winter, J. C. F., & Dodou, D. (2011). Why the Fitts list has persisted throughout the history of function allocation. Cognition, Technology & Work, 1-11.
Doshi, A., Cheng, S. Y., & Trivedi, M. M. (2009). A novel heads-up display for driver assistance. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(1), 85-93.
Fincannon, T., Leotaud, P., Ososky, S., & Jentsch, F. (2011). Using sheepdog trials as a mental model metaphor for human interaction with an intelligent robot. Orlando, FL: University of Central Florida.
Fountain, H. (2012, October 26). Yes, driverless cars know the way to San Jose. The New York Times. Retrieved from http://www.nytimes.com
Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human—robot teams. Interaction Studies, 8(3), 483-500.
Kiesler, S., & Goetz, J. (2002). Mental models of robotic assistants. In Proceedings of the Conference on Human Factors in Computing Systems (CHI 2002) (pp. 576-577). Minneapolis, MN.
Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 273-283.
Monette, F., Corriveau, A., & Dubois, V. (2007). U.S. Patent No. 7,286,888. Washington, DC: U.S. Patent and Trademark Office.
Ososky, S. (2013). The influence of task role mental model on human interpretation of robot behavior (Unpublished doctoral dissertation). University of Central Florida, Orlando, FL.
Ososky, S., Schuster, D., Phillips, E., & Jentsch, F. (2013). Building appropriate trust in human—robot teams. In Association for the Advancement of Artificial Intelligence Spring Symposium (pp. 60-65). Palo Alto, CA: Association for the Advancement of Artificial Intelligence.

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced "complacency". The International Journal of Aviation Psychology, 3(1), 1-23.
Phillips, E., Ososky, S., Grove, J., & Jentsch, F. (2011). From tools to teammates: Toward the development of appropriate mental models for intelligent robots. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1491-1495. doi:10.1177/1071181311551310
Scerri, P., Van Brackle, D., Rouff, C., Dick, H., & Vallandingham, K. (2012). Freeform command and control of robotic platforms. American Institute of Aeronautics and Astronautics.
Sims, V. K., Chin, M. G., Sushil, D. J., Barber, D. J., Ballion, T., Clark, B. R., ... Finkelstein, N. (2010). Anthropomorphism of robotic forms: A response to affordances? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 602-605. doi:10.1177/154193120504900383
Stout, R. J., Cannon-Bowers, J. A., Salas, E., & Milanovich, D. (1999). Planning, shared mental models, and coordinated performance: An empirical link is established. Human Factors, 41(1), 61-71. doi:10.1518/001872099779577273
Torrey, C., Fussell, S. R., & Kiesler, S. (2013). How a robot should give advice. In Proceedings of the 8th International Conference on Human—Robot Interaction (HRI) (pp. 275-282). Tokyo, Japan: IEEE.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems (NASA-TM-81206, A-8210). Retrieved from http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19800017542_1980017542.pdf
Wright, M. C., & Kaber, D. B. (2005). Effects of automation of information-processing functions on teamwork. Human Factors, 47(1), 50-66. doi:10.1518/001872005365377
