Machine Learning Incorporates Human Error and Uncertainty
Computing has changed dramatically, and machine learning with a dash of artificial intelligence is now the norm. Yet computers remain rigid: they cannot replicate human responses and can only process the information fed into them.
Thus, in machine learning, humans typically feed the machines large amounts of filtered and verified information to ensure the data's integrity and accuracy. Yet the machines can only produce technical responses; they remain devoid of human characteristics.
New developments in machine learning
Researchers are now working to incorporate uncertainty into machine learning systems. The project is a collaboration among the University of Cambridge (UK), The Alan Turing Institute, Google DeepMind, and Princeton University. Among the researchers is Matthew Barker, a 2023 graduate of Gonville & Caius College. He said they aim to bridge machine learning and behavioral research so that machine learning can begin to handle human uncertainty. They used established image classification datasets so humans could provide feedback and rate how uncertain they were while annotating particular images. In their study, the researchers found that the systems handle uncertain feedback better when the machines train with variable labels. However, they also found that the overall performance of the hybrid system degrades quickly once humans start providing feedback.
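To make the idea of training with "variable labels" concrete, here is a minimal sketch, not the researchers' actual code, of how a soft, probability-weighted label differs from a conventional hard label when computing a cross-entropy loss. The class count and probability values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative only: hard labels assume the annotator is fully certain,
# while soft ("variable") labels spread probability mass across classes
# according to the annotator's stated uncertainty.
logits = torch.randn(1, 3)  # model outputs for one image, 3 classes

hard_target = torch.tensor([0])                # "definitely class 0"
soft_target = torch.tensor([[0.7, 0.2, 0.1]])  # "probably class 0, maybe 1 or 2"

hard_loss = F.cross_entropy(logits, hard_target)
soft_loss = F.cross_entropy(logits, soft_target)  # probability targets need PyTorch >= 1.10
print(hard_loss.item(), soft_loss.item())
```

With soft targets, the model is no longer penalized for hedging on images the annotator was unsure about, which is one plausible reason such systems cope better with uncertain feedback.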
Even with the power of artificial intelligence, these systems still fail to grasp human error and uncertainty, especially where humans provide feedback to a machine learning model: most such systems are programmed on the assumption that humans are always correct and certain. In the real world, however, humans make occasional mistakes and face uncertainty when making decisions.
Barker added that years of behavioral research show humans are rarely 100 percent certain, yet the concept remains challenging to build into machine training. He also admitted that their work raised more questions than it answered. By accounting for human behavior, they hope to improve the reliability and trustworthiness of human-in-the-loop systems.
According to Barker, they are looking at applying these new machine learning developments in fields where safety is crucial, such as identifying medical conditions. Once refined, the hybrid ML system could help reduce risk and improve the reliability of, and confidence in, these systems.
Katherine Collins and Matthew Barker, first author and co-author respectively, explained that uncertainty is central to how humans reason about the world, yet many AI models fail to account for it. While many machine learning developers are working to address model uncertainty, far less work addresses uncertainty from the human's point of view. Their research examines what happens when people are uncertain, which is vital in safety-critical settings.
Methodology
For this study, they used three benchmark machine learning datasets: one for classifying photos of birds, one for classifying chest X-rays, and one for categorizing digits.
For the digit and chest X-ray datasets, the researchers simulated uncertainty. For the bird dataset, they asked participants to rate how certain they were about the images they were labeling, for example about the bird's color.
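As an illustration of how an annotator's self-rated confidence could be turned into a training label, here is a hypothetical mapping (the study's exact elicitation scheme may differ): the stated confidence goes to the chosen class, and the remaining probability mass is spread uniformly over the alternatives.

```python
import numpy as np

def rating_to_soft_label(chosen_class: int, confidence: float, num_classes: int) -> np.ndarray:
    """Hypothetical mapping: put `confidence` mass on the chosen class and
    spread the remainder uniformly over the other classes."""
    label = np.full(num_classes, (1.0 - confidence) / (num_classes - 1))
    label[chosen_class] = confidence
    return label

# An annotator who is 60% sure an image shows class 2 (out of 5 classes):
print(rating_to_soft_label(chosen_class=2, confidence=0.6, num_classes=5))
# -> [0.1 0.1 0.6 0.1 0.1]
```

Spreading the leftover mass uniformly is the simplest choice; a real system might instead weight the annotator's second choice or use a learned confusion model.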
The project still needs more work, as the researchers identified many open challenges in incorporating humans into ML models. They plan to release their datasets so other researchers can study the subject further and so uncertainty can be built into future machine learning systems.