25.05.2023 13:00


Scientists are working to understand artificial intelligence

Researchers have developed an innovative method for evaluating how artificial intelligence (AI) understands data, improving transparency and trust in AI-based diagnostic and predictive tools.
Photo: Unsplash

This approach helps users understand the inner workings of the "black boxes" of AI algorithms, especially in medical applications and in the context of the upcoming European Union Artificial Intelligence Act.

A group of researchers from the University of Geneva (UNIGE), Geneva University Hospitals (HUG) and the National University of Singapore (NUS) has developed a new approach for assessing how well artificial intelligence technologies can be understood.

This breakthrough enables greater transparency and credibility of AI-powered diagnostic and predictive tools. The new method reveals the mysterious workings of the so-called "black boxes" of AI algorithms, helping users understand what influences the results that AI produces and whether they can be trusted.

This is especially important in scenarios that have a significant impact on human health and well-being, such as the use of AI in the medical environment. The research has particular significance in the context of the upcoming European Union Artificial Intelligence Act, which seeks to regulate the development and use of AI in the EU.

The findings were recently published in the journal Nature Machine Intelligence. Temporal data, which represent the evolution of information over time, are everywhere: for example, in medicine when recording heart activity with an electrocardiogram (ECG), in the study of earthquakes, in tracking weather patterns, or in economics for monitoring financial markets.

These data can be modeled using AI technologies to build diagnostic or predictive tools. The progress of AI, and especially of deep learning, which involves training a machine on large amounts of data so that it learns to interpret them and extract useful patterns, opens the way to increasingly accurate tools for diagnosis and prediction. Nevertheless, the lack of insight into how AI algorithms work and what influences their results raises important questions about the reliability of black-box AI technology.

"The way these algorithms work is at best non-transparent," says Professor Christian Lovis, Director of the Department of Radiology and Medical Informatics at the UNIGE Faculty of Medicine and Head of the Department of Medical Information Science at HUG and one of the authors of the study on understanding AI.

"Of course, the stakes, especially the financial stakes, are extremely high. But how can we trust a machine without understanding the basis of its reasoning? These questions are crucial, especially in sectors such as medicine, where AI-driven decisions can affect people's health and even their lives, and in finance, where they can lead to huge capital losses. .«

Understanding methods aim to answer these questions by revealing why and how the AI arrived at a given decision. "Knowing which elements tipped the scales for or against a decision in a specific situation, thus allowing at least some transparency, increases our confidence in how these tools operate," says assistant professor Gianmarco Mengaldo, director of the MathEXLab laboratory at the Faculty of Design and Engineering of the National University of Singapore.

"However, current understanding methods that are widely used in practical applications and industrial workflows produce very different results, even when applied to the same task and data set. This raises an important question: which method is correct, given that there should be a unique, correct answer? That is why evaluating methods of understanding is becoming just as important as the understanding itself."


Differentiating between important and irrelevant information

Distinguishing important from irrelevant data is key to developing AI technologies that can be fully understood. For example, when artificial intelligence analyzes images, it focuses on a few salient features.

Hugues Turbé, a doctoral student in Professor Lovis's laboratory and first author of the study, explains: "AI can, for example, distinguish between a picture of a dog and a picture of a cat. The same principle applies to the analysis of time sequences: the machine must be able to select the elements on which it bases its reasoning. In the case of ECG signals, this means reconciling the signals from different electrodes to assess possible fluctuations that would be a sign of a particular heart disease."
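To make the idea of selecting salient elements in a time sequence concrete, here is a minimal sketch, not the authors' code, of gradient-based saliency for a toy multi-lead ECG classifier; the architecture, channel count and sequence length are illustrative assumptions.

```python
# Minimal sketch (not the authors' method): gradient-based saliency
# over a toy time-series classifier. Shapes and model are assumptions.
import torch
import torch.nn as nn

# Toy 1-D CNN over a 12-lead ECG: input shape (batch, channels=12, timesteps=1000)
model = nn.Sequential(
    nn.Conv1d(12, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),          # e.g. "healthy" vs "possible arrhythmia"
)

signal = torch.randn(1, 12, 1000, requires_grad=True)   # stand-in ECG recording
logits = model(signal)
logits[0, logits.argmax()].backward()                    # gradient of the predicted class

# Saliency map: which electrodes and timesteps most influence the prediction
saliency = signal.grad.abs()                             # shape (1, 12, 1000)
top_timesteps = saliency.sum(dim=1).topk(20).indices     # 20 most influential timesteps
print(top_timesteps)
```

Hiding or perturbing the timesteps that such a saliency map highlights is the starting point for the evaluation described below.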

Choosing a method of understanding among all those available for a particular purpose is not easy. Different understanding methods often deliver very different results, even when applied to the same data set and task.

To tackle this challenge, the researchers developed two new evaluation methods to help understand how AI makes decisions: one to identify the most important parts of a signal and another to assess their relative importance to the final prediction.

To assess understanding, they hid part of the data to see whether it was relevant to the AI's decision-making. However, this approach sometimes introduced errors into the results. To correct for this, the model was trained on an augmented data set that included the hidden data, which helped it stay balanced and accurate. The team then created two ways to measure how well understanding methods perform: whether the AI was using the right data to make its decisions, and whether all the data was weighted fairly.
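The following is a hedged sketch of the general perturbation idea described above, not the published benchmark: hide the timesteps an explanation marks as important and compare the resulting drop in the model's confidence with the drop caused by hiding random timesteps. The function names and the value of k are assumptions made for illustration.

```python
# Sketch of perturbation-based evaluation of an explanation (illustrative only).
import torch

def mask_timesteps(signal, timesteps, fill_value=0.0):
    """Replace the selected timesteps (across all channels) with a neutral value."""
    masked = signal.clone()
    masked[:, :, timesteps] = fill_value
    return masked

def faithfulness_score(model, signal, saliency, k=20):
    """Drop in predicted-class probability when the top-k salient timesteps
    are hidden, compared with hiding k random timesteps.
    saliency is assumed to have shape (1, channels, timesteps)."""
    model.eval()
    with torch.no_grad():
        probs = model(signal).softmax(dim=-1)
        cls = probs.argmax(dim=-1)
        base = probs[0, cls]

        top_k = saliency.sum(dim=1).topk(k).indices[0]          # "important" timesteps
        rand_k = torch.randperm(signal.shape[-1])[:k]            # random baseline

        drop_top = base - model(mask_timesteps(signal, top_k)).softmax(-1)[0, cls]
        drop_rand = base - model(mask_timesteps(signal, rand_k)).softmax(-1)[0, cls]
    return drop_top.item(), drop_rand.item()
```

In this scheme an explanation is judged faithful when hiding its "important" timesteps hurts the prediction far more than hiding random ones; training on data that already contains masked segments, as described above, keeps such perturbed inputs within the distribution the model has seen.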

"Our method aims to evaluate the model that we will actually use in our operational area and thereby ensure its reliability," explains Hugues Turbé. To continue the research, the team developed a synthetic data set that is available to the scientific community to easily evaluate any new AI designed to interpret sequences in čas.

The future of artificial intelligence in medicine

The team plans to test their method in a clinical setting, where fear of AI is still widespread. "Building trust in the assessment of artificial intelligence is a key step towards acceptance in clinical settings," explains Dr. Mina Bjelogrlic, who leads the machine learning team in Professor Lovis's division and is a co-author of this study. "Our research focuses on evaluating AI based on time sequences, but the same methodology could be applied to AI based on other types of data, such as image or text data. The goal is to ensure the transparency, comprehensibility and reliability of AI for its users."

Understanding the inner workings of artificial intelligence is key to building trust in its use, especially in critical sectors such as medicine and finance. The research, conducted by a team of researchers from the University of Geneva and the National University of Singapore, offers an innovative method of assessing the understanding of artificial intelligence that helps users understand why and how decisions are made. This approach is especially important for medical applications, where decisions made by artificial intelligence can be life-saving.

