Humans and Automation
The world we live in is becoming increasingly digitalised. This creates opportunities for the development of safer and more efficient work environments and practices, while at the same time challenging our previous understanding of the role of the human in advanced technological systems. The Humans and Automation (HA) department at IFE works to meet this challenge.
HA is a department within the IFE Digital Systems sector, bringing together a multi-disciplinary team with experience in human-automation interaction, cognitive and experimental psychology, machine learning and human factors. Together we work to support the design of digital interfaces and to develop intelligent user monitoring systems that evaluate the cognitive and behavioural state of the human using those interfaces.
Our focus is on the analysis and optimisation of human performance in highly digital and complex environments, with the ultimate goal of improving both performance and safety.
We work in four key areas:
We investigate how humans use and respond to digital and Artificial Intelligence (AI) systems in a range of operating environments, from power plant control rooms to the driver’s seat of your car. Using our knowledge and experience of how different levels of automation can affect human performance, we develop design recommendations to optimise the human-automation interaction.
We develop smart monitoring systems that analyse biometric data using computer vision and machine learning to detect performance impairments such as fatigue, distraction, stress and other cognitive and emotional factors. When working with automated systems, it is just as important for the machine to understand the state of the human as it is for the human to understand the state of the machine. The data we collect and analyse can be used to develop human-automation systems that are truly transparent and collaborative, enabling optimal performance and safety.
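To make this concrete, one widely used camera-based signal for fatigue is the eye aspect ratio (EAR), computed from facial landmarks: prolonged eye closure drives the ratio down. The sketch below is illustrative only; the landmark ordering, threshold and frame count are assumptions, not IFE's actual monitoring pipeline.

```python
# Illustrative sketch of EAR-based fatigue detection.
# Assumes six (x, y) landmarks per eye, ordered as in the common
# 68-point face-landmark convention (corner, two upper, corner, two lower).
from math import dist

def eye_aspect_ratio(eye):
    """Ratio of eye height to eye width; drops towards zero as the eye closes."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def detect_fatigue(ear_series, threshold=0.2, min_frames=15):
    """Flag fatigue when EAR stays below threshold for min_frames
    consecutive video frames (a prolonged eye closure)."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False

# Synthetic example: eyes open (EAR ~0.30), then a long closure (~0.10).
frames = [0.30] * 20 + [0.10] * 20 + [0.30] * 10
print(detect_fatigue(frames))  # True
```

In a real system the landmark coordinates would come from a face-landmark detector running on the video stream, and the EAR signal would typically be one feature among several feeding a learned classifier rather than a fixed threshold.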
How humans feel about advanced technologies can play a significant role in how these technologies are used, or even misused. If people do not understand advanced technologies, or are uncomfortable with them, they are less likely to use them in the expected way, which can undermine the safety and performance those technologies were designed to deliver. In addition to developing and analysing digital and AI technologies, we use our experience and knowledge of human performance and cognitive psychology to investigate the related ethical issues, which can affect situation awareness and trust in these situations.
Machine learning algorithms are often considered to be a “black box” – we know that they work, but we do not know how they work, and therefore we do not fully understand the benefits and, perhaps more importantly, the limitations of the AI. As these algorithms become more complex, their inner workings become even more opaque, to the point that even the engineer developing the algorithm cannot fully explain why it works as it does. AI transparency is not only important for the human user, but also for those responsible for regulating how and when AI should be used. Using technology developed in our Biometrics lab, we are developing a virtual environment within which algorithms can be tested to determine how they work and when they fail.
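One simple way to probe a black box without opening it is permutation importance: treat the model as an opaque function, shuffle one input feature at a time, and measure how much the model's accuracy drops. A large drop means the model relies on that feature. The toy model and data below are assumptions for illustration only, not a system or method used at IFE.

```python
# Minimal sketch of permutation importance for probing a black-box model.
import random

def black_box(features):
    # Stand-in for an opaque model: secretly depends only on feature 0.
    return 1 if features[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, rng):
    """Accuracy drop when the given feature column is shuffled."""
    shuffled_col = [x[feature] for x in X]
    rng.shuffle(shuffled_col)
    X_perm = [x[:feature] + [v] + x[feature + 1:]
              for x, v in zip(X, shuffled_col)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

for f in range(2):
    drop = permutation_importance(black_box, X, y, f, rng)
    print(f"feature {f}: accuracy drop {drop:.2f}")
```

Here shuffling feature 0 causes a large accuracy drop while shuffling feature 1 causes none, revealing what the "black box" actually depends on, even though we never inspected its internals. Systematic perturbation tests of this kind are one way a virtual test environment can map out how an algorithm works and where it fails.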