Secret Intelligence Service
Artificial Intelligence Primer (II)
The Future State Project
Iteration as of 2018
Humans and robots working together / cross-training
Swapping of roles improves efficiency as well as robots’ confidence and humans’ trust.
Spending time in another person’s shoes can teach us much about them. The same approach is being used to develop a better understanding between humans and robots/humanoids, so as to facilitate working together as a team.
Robots/humanoids are, as we know, increasingly being introduced into manufacturing, administration, education and sales functions to perform tasks that bring them into closer contact with humans. But while a great deal of work is being done to ensure robots and humans can operate safely side-by-side, comparatively little effort is being expended to make robots/humanoids sufficiently smart to work effectively with people.
Humans are not robots : they do not do things the same way every single time. Robots, by contrast, are programmed to perform tasks in exactly the same way each time, and this mismatch raises the question of what needs to be done if they are going to work in concert with people.
Most existing research into making robots better team players is based on the concept of ‘interactive reward’, in which a human trainer gives a positive or negative response each time a robot performs a task.
However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.
Investigative studies have been carried out into whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots/humanoids. One such technique, known as ‘cross-training’, allows team members to exchange roles with each other on given days. This allows people to form a better idea of how their role affects their partners and how their partners’ roles affect them.
Similar experiments have been carried out with mixed groups of humans and robots/humanoids, demonstrating that cross-training is an extremely effective team-building tool.
To allow robots to take part in the cross-training experiments, the researchers first had to design a new algorithm to allow the devices to learn from their role-swapping experiences. Existing reinforcement-learning algorithms were therefore modified to allow the robots to take in not only information from positive and negative rewards, but also information gained via demonstration. In this way, by watching their human counterparts switch roles to carry out their work, the robots/humanoids were able to learn how the humans wanted them to perform the same task.
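The source does not specify the modified algorithm, but the idea of combining interactive reward with demonstration can be sketched as follows. This is a minimal toy model, not the researchers’ actual method : the class name, the learning rate alpha and the demo_weight parameter are all hypothetical.

```python
import random
from collections import defaultdict

class CrossTrainingLearner:
    """Toy learner mixing interactive reward with demonstrations.

    Hypothetical sketch: states and actions are plain hashable values;
    alpha is the learning rate for reward feedback, demo_weight scales
    how strongly a human demonstration pulls a value estimate upward.
    """

    def __init__(self, actions, alpha=0.1, demo_weight=0.5):
        self.actions = list(actions)
        self.alpha = alpha
        self.demo_weight = demo_weight
        self.q = defaultdict(float)          # (state, action) -> value

    def update_from_reward(self, state, action, reward):
        # Classic interactive-reward update ('good robot' / 'bad robot').
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])

    def update_from_demonstration(self, state, demonstrated_action):
        # Role-swapping: treat the human's own choice in this state as
        # implicit positive evidence for that action.
        key = (state, demonstrated_action)
        self.q[key] += self.demo_weight * (1.0 - self.q[key])

    def act(self, state):
        # Pick the highest-valued action, breaking ties randomly.
        best = max(self.q[(state, a)] for a in self.actions)
        return random.choice(
            [a for a in self.actions if self.q[(state, a)] == best])
```

The point of the sketch is that both channels update the same value table, so a demonstration observed during role-swapping shapes behaviour just as a spoken reward would.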
Each human-robot team then carried out a simulated task in a virtual environment, with half of the teams using the conventional interactive reward approach, and half using the cross-training technique of switching roles halfway through the session. Once the teams had completed this virtual training session, they were asked to carry out the task in the real world, but this time sticking to their own designated roles.
The period in which human and robot were working at the same time (concurrent motion) increased by 71 percent in teams that had taken part in cross-training, compared to the interactive-reward teams. The amount of time the humans spent idle, for example while waiting for the robot to complete a stage of the task, decreased by 41 percent.
When the researchers studied the robots themselves, they found that the learning algorithms recorded a much lower level of uncertainty about what their human teammate was likely to do next (a measure known as the entropy level) if the robots had been through cross-training.
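The entropy measure mentioned here is the standard Shannon entropy of the robot’s predicted distribution over the teammate’s next action. A minimal illustration (the example distributions are hypothetical, not from the study) :

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution.

    A lower value means the robot is more certain about what its
    human teammate will do next.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical predicted distributions over four possible next actions:
before_training = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain
after_training  = [0.85, 0.05, 0.05, 0.05]   # confident prediction

print(entropy(before_training))  # 2.0 bits, the maximum for 4 actions
print(entropy(after_training))   # lower: the teammate is predictable
```

A drop in this quantity after cross-training is exactly what “lower uncertainty about the teammate” means in information-theoretic terms.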
When responding to a questionnaire at the conclusion of the experiment, human participants in cross-training were far more likely than those in the reward-only group to say the robot had carried out the task according to their preferences, and reported greater levels of trust in their robotic teammate. This is the first evidence that human-robot teamwork is improved when a human and a robot train together by switching roles, in a manner similar to effective human team-training practices.
This improvement in team performance could be due to the greater involvement of both parties in the cross-training process. When the person trains the robot through reward, the passage of information is one-way : the person says ‘good robot’ or ‘bad robot’. But when they switch roles, the person is better able to adapt to the robot’s capabilities and learn what it is likely to do, and it is argued that this adaptation on the person’s side results in better team performance.
The work demonstrates that strategies which are successful in improving interaction among humans can often do the same for humans and robots/humanoids. People easily attribute human characteristics to a robot and treat it socially, so it is not entirely surprising that this transfer from the human-human domain to the human-robot domain not only makes the teamwork more efficient, but also enhances the experience for the participants, in terms of trusting the robot.
Cognitive Robotics and Human Robot Interaction
This approach to Human-Robot Interaction is derived from cognitive robotics, i.e., the understanding of how and why people act the way they do. More capable and intelligent robots and autonomous systems will require more human-like cognitive abilities.
The hypothesis is that robots and autonomous systems that use human-like representations, strategies, and knowledge will enable better collaboration and interaction with the people who use them. Similar representations and reasoning mechanisms make it easier for people to work with these autonomous systems. An autonomous system must be able to explain its decisions in a way that people understand, which should lead to better trust and acceptance of the system. If an autonomous system can predict a person’s needs, even in the very short term, it can prepare for it and act appropriately.
In this line of research, computational cognitive models are used to build process models of human cognitive skills, and those models are then used as reasoning mechanisms on the robots and autonomous systems. Computational cognitive models of people are built covering their perception, memory, attention, reasoning, spatial abilities and thinking. An embodied version of ACT-R (Anderson et al., 2007) is used, referred to as ACT-R/E (Trafton et al., 2013). ACT-R (and ACT-R/E) are computational systems based on theories of how human reasoning works ; they capture known facts and constraints regarding how the mind works, and connect well with psychological data (experiments) and neuroscience data (fMRI).
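One concrete piece of ACT-R that such models rely on is the base-level learning equation for declarative memory, B_i = ln(Σ_j t_j^(-d)), where the t_j are the times since each past use of a memory chunk and d is a decay parameter conventionally set to 0.5. A small sketch (the helper function and example timings are illustrative, not part of ACT-R/E itself) :

```python
import math

def base_level_activation(ages, d=0.5):
    """ACT-R base-level learning: B_i = ln(sum_j t_j**(-d)).

    'ages' are the times (in seconds) since each past use of a
    memory chunk; d is the decay parameter (0.5 by convention).
    Recent and frequent use yields higher activation, which a model
    can use to judge whether a person has likely forgotten something.
    """
    return math.log(sum(t ** (-d) for t in ages))

# A chunk used recently and often is more active than one used
# once, long ago -- so the robot could offer a reminder for the latter:
recent = base_level_activation([5.0, 20.0, 60.0])
stale = base_level_activation([600.0])
```

This decay-with-rehearsal behaviour is what lets an ACT-R/E model predict imperfect human memory rather than assume perfect recall.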
There are two primary scientific goals :
(I) To understand the embodied nature of cognition : how people work in the physical world.
(II) To improve human-robot interaction through high-fidelity models of individuals, so as to provide some assistance to them. For example, models understand that people do not have perfect memories and cannot see behind their heads. This knowledge allows a model to remind a person what they were doing if they forgot, or to show them something in the environment they did not see.
Some of the cognitive models that have been developed and have been used in various research projects include :
Gaze following : The ability of an infant around the age of 18 months to follow another person’s gaze to objects in the environment (such as a toy).
Level 1 Perspective Taking : The ability to understand what another person is pointing at, which develops around the age of two years.
Visual, Spatial Perspective Taking via mental simulation : Around the age of four to five years, a child can mentally simulate how the world looks from someone else’s point of view.
Conversation tracking : Being able to follow several people engaged in conversation and knowing where to look when during conversations.
Teaming via model of one’s self : Allows deciding what a team mate will do based on modeling the team mate as one’s self.
Theory of Mind : The ability to infer the beliefs, desires and intentions of others, which develops around the age of five years.
The Challenges of Facial Masking in Parkinson’s Disease Rehabilitation through Co-Robot Mediator
Many aspects of co-robots are currently being investigated, from algorithms for scene and activity understanding, to planning for human-robot teaming, and natural language interactions between humans and robots. However, there is surprisingly little work on mechanisms that will allow co-robots to behave in a manner that is ethical and sensitive to the moral context and social norms. This is particularly worrisome, as simple robots are already entering society without any notion of ethically acceptable behaviour, and the situation will only be exacerbated in the future if various kinds of social and assistive robots cause humans to form unidirectional emotional bonds with robots that are not themselves sensitive to human emotions and feelings. These investigations therefore seek to tackle a hitherto completely overlooked ethical aspect of human-robot interaction, i.e., the maintenance of human dignity and the avoidance of stigmatisation of human patients.
The overarching scientific goal is two-fold : (I) to develop a robotic architecture endowed with moral emotional control mechanisms, abstract moral reasoning, and a theory of mind that allow co-robots to be sensitive to human affective and ethical demands, and (II) to develop a specific instance of the architecture for a co-robot mediator between people with ‘facial masking’ due to Parkinson’s disease that reduces the ability to signal emotion, pain, personality and intentions to their family caregivers, and health care providers who often misinterpret the lack of emotional expressions as disinterest and an inability to adhere to treatment regimen, resulting in stigmatisation. Specific questions to address include :
(I) How can an expanded set of moral emotions, particularly empathy, be modelled and exhibited by co-robots to provide quantitatively better care of patients, in particular patients with early Parkinson’s Disease?
(II) How can a theory of mind of both caregiver and patient (including their goals and emotional states) be developed, and how can it be used by a co-robot mediator to improve the quality of care for patients while enhancing the dignity of both patient and caregiver?
To tackle these problems, the project brings together roboticists with expertise and extensive prior experience in robot ethics and in modelling emotions, as well as in implementing them in integrated autonomous robotic systems. This is combined with expert knowledge of early Parkinson’s Disease rehabilitation and daily social life.
An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviours for more efficient and friendly interaction. Advances in human-computer interaction technologies have been widely used in improving human-robot interaction. It is now possible to interact with robots via natural communication means such as speech.
The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75 percent of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation with voice control was lower than with manual control.
Computing with emotions
The importance of emotional expression as part of human communication has been understood since the seventeenth century, and has been explored scientifically since Charles Darwin and others in the nineteenth century. Recent advances in Psychology have greatly improved understanding of the role of affect in communication, perception, decision making, attention and memory. At the same time, advances in technology mean that it is becoming possible for machines to sense, analyse and express emotions. One can now consider how these advances relate to each other and how they can be brought together to influence future research in perception, attention, learning, memory, communication, decision making and other applications.
It is useful therefore to survey recent advances in theories of emotion and affect, their embodiment in computational systems, the implications for general communications, and broader applications. The combination of new results in psychology with new techniques of computation on new technologies will enable new applications in commerce, education, entertainment, security, therapy and everyday life. However, there are important issues of privacy and personal expression that must also be considered.
Probo, a robot-assisted therapy platform (Bram Vanderborght, Vrije Universiteit Brussel, Belgium)
Probo is a social robot designed to focus on verbal and non-verbal communication. The robot was designed to act as a social interface by employing human-like social cues and communication modalities. It has a fully expressive and anthropomorphic head. With 20 motors in the head, the robot is able to express attention and emotions via its gaze and facial expressions. To guarantee safe physical interaction between the robot and the children, compliant actuation systems and a layered structure of foam and fabric were implemented. These elements contribute to safe, soft and huggable interaction. During the interactions, children have been very willing to touch the robot physically. A new version with actuated arms, hands and body, able to gesture, adapt body posture and perform basic manipulation skills, is currently under construction. Probo has been used with autistic children. Several studies with Probo have been performed : to test whether typically developed children are able to recognise the emotions of the robot; as a social storytelling agent for improving social skills in ASD children; to support children in recognising basic emotions; and to mediate social play skills of children with ASD with their siblings. The outcomes of these studies showed positive results. In all the studies, the robot was used in a ‘Wizard of Oz’ setup.
Future work includes evolving towards more supervised autonomous interaction.
Knowing where and when people actually look can provide a great deal of information regarding focus of attention and other mental states during an interaction. Eye trackers can therefore be a very powerful tool for social robotics research. In consequence, extensive research has emerged in eye-tracking hardware and software, with a focus on wearable, mobile systems that provide real-time eye-tracking data as sensor input for a robot/humanoid.
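Using gaze as sensor input usually comes down to mapping a fixation point onto the scene objects the robot knows about. A minimal sketch of that step (the function, coordinate convention and tolerance radius are all hypothetical, not tied to any particular eye-tracking product) :

```python
import math

def attended_object(gaze_point, objects, max_dist=0.15):
    """Map a gaze fixation to the scene object being attended to.

    gaze_point: (x, y) fixation in normalised scene coordinates.
    objects: dict mapping object name -> (x, y) centre.
    max_dist: hypothetical tolerance radius; fixations landing
    farther than this from every object return None.
    """
    name, centre = min(
        objects.items(),
        key=lambda kv: math.dist(gaze_point, kv[1]))
    return name if math.dist(gaze_point, centre) <= max_dist else None

scene = {"cup": (0.2, 0.3), "toolbox": (0.7, 0.6)}
print(attended_object((0.22, 0.28), scene))  # cup
print(attended_object((0.5, 0.9), scene))    # None
```

Feeding a stream of such fixation labels to the robot gives it a running estimate of the human’s focus of attention, which the cognitive models described above can then reason over.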
Adversitate. Custodi. Per Verum