
Cambridge-led research finds that stable, accurate deep neural networks that exist in theory cannot always be computed in practice

Today, deep neural networks are used ever more widely, helping to design microchips, predict protein folding, and outperform humans at complex games. But there is also plenty of evidence that they are often unstable: small changes in the data a deep neural network receives can lead to huge changes in its results.

For example, the study "One pixel attack for fooling deep neural networks" showed that changing a single pixel of an image can make an AI recognize a horse as a frog. Samuel Finlayson, a computer scientist and biomedical informatician at Harvard Medical School, has likewise found that medical images can be modified in ways imperceptible to the human eye, causing AI systems to misdiagnose cancer nearly 100 percent of the time.
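The mechanism behind such attacks can be illustrated with a toy example. The sketch below is not the one-pixel attack from the cited paper; it uses a hypothetical linear classifier with made-up weights to show how a perturbation aligned with the model's weights can flip a decision while barely changing the input (the step is in the style of the well-known fast gradient sign method):

```python
import numpy as np

# A toy linear classifier: score = w . x, label = sign(score).
# The weights and input are illustrative, not from any trained model.
w = np.array([0.4, -0.3, 0.2, 0.1])
x = np.array([0.5, 0.9, 0.1, 0.3])           # original input

score = w @ x                                 # slightly negative -> class -1
print(score, np.sign(score))

# A tiny perturbation aligned with the weights flips the decision,
# even though no input feature changes by more than 0.05.
eps = 0.05
x_adv = x + eps * np.sign(w)                  # FGSM-style step
print(np.max(np.abs(x_adv - x)))              # perturbation size: 0.05
print(np.sign(w @ x_adv))                     # decision flips to +1
```

Because the input sits close to the decision boundary, an adversary who knows the weights can always find such a minimal nudge; deep networks are vulnerable for the same geometric reason, just in far higher dimensions.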

Previous studies offered mathematical proof that stable, accurate neural networks exist for a wide variety of problems. However, researchers at the Universities of Cambridge and Oslo have now found that such AI systems may be stable and accurate only in limited circumstances: a neural network that is stable and accurate in theory may not describe anything that can be realized in practice.

"Theoretically, neural networks have very few limitations," said Matthew Colbrook, a mathematician at the University of Cambridge in the United Kingdom. The problem arises when one tries to compute these neural networks.

"Digital computers can only compute certain specific neural networks," says Vegard Antun, a mathematician at the University of Oslo in Norway. "Sometimes it is impossible to compute an ideal neural network."

Such a statement may sound confusing. IEEE Spectrum, in covering the study, used a cake analogy: it is "as if someone said there might be a cake, but there was no recipe for making it."

The researchers would put it differently: the problem is not the recipe but the tools needed to make the cake. "We say there may be a recipe for the cake, but no matter what blender you have, you may not be able to make the cake you want. And when you try to make it with the blender in your kitchen, you end up with a completely different cake."

Continuing the analogy: "You may not be able to tell that the cake is wrong until you try it, and by then it is too late. However, in some cases, your blender is enough to make the cake you want, or at least a good approximation of it."

These new findings about the limitations of neural networks echo earlier work by the mathematician Kurt Gödel and the computer scientist Alan Turing on the limits of computation. Roughly speaking, they revealed that "some mathematical statements can never be proven or refuted, and there are some basic computational problems that computers cannot solve," Antun said.

The study, titled "The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem," was published March 16 in the journal Proceedings of the National Academy of Sciences.

In an artificial neural network, components called "neurons" are fed data and collaborate to solve a problem, such as recognizing an image. The network iteratively adjusts the connections between its neurons and checks whether the resulting patterns of behavior are better at finding a solution. Over time, it discovers which patterns are best at computing results and adopts them as defaults, mimicking the learning process in the human brain. A network with multiple layers of neurons is called "deep."
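That iterative adjust-and-check loop can be sketched in a few lines. This is a minimal, assumed example, a single artificial neuron trained with the classic perceptron update rule to fit the AND function, not the deep networks the study concerns:

```python
import numpy as np

# A single "neuron": output = 1 if w . x + b > 0, else 0.
# It repeatedly adjusts its connection weights toward the target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                     # AND truth table

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                        # iterate until the pattern fits
    for xi, yi in zip(X, y):
        pred = 1 if w @ xi + b > 0 else 0
        w += lr * (yi - pred) * xi             # strengthen/weaken connections
        b += lr * (yi - pred)

preds = [1 if w @ xi + b > 0 else 0 for xi in X]
print(preds)                                   # [0, 0, 0, 1]
```

A deep network stacks many layers of such units and tunes millions of connections at once, but the underlying idea, nudging weights until the outputs match, is the same.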

Previous studies offered mathematical proof that stable, accurate neural networks exist for a wide variety of problems. In the new study, however, the researchers found that while stable, accurate neural networks may exist in theory for many problems, paradoxically, there may be no algorithm that can actually compute them.

The new study found that no matter how much data an algorithm can access, or how accurate that data is, the algorithm may still be unable to compute a stable, accurate neural network for a given problem. Study co-author Anders Hansen, a mathematician at the University of Cambridge, says this is similar to Turing's argument that there are problems computers cannot solve regardless of computing power and runtime.

"There are inherent limitations to what computers can do, and those limitations appear in AI as well," Colbrook says. "This means that neural networks with good properties that exist in theory may not accurately describe what can happen in reality."

These new findings do not suggest that all neural networks are fatally flawed, but rather that they may be stable and accurate only in limited circumstances. "In some cases it is possible to compute stable and accurate neural networks," Antun says. "The key problem is the 'in some cases' part, and the biggest issue is finding these cases. At the moment, little is known about how to do so."

The researchers found that there is often a trade-off between the stability and accuracy of a neural network. "The problem is that we want stability and accuracy at the same time," Hansen says. "In practice, for safety-critical applications, one may have to sacrifice some accuracy to ensure stability."
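The same trade-off appears in classical numerical analysis, which the sketch below uses as a stand-in (this is not the paper's construction). Solving a nearly singular linear system exactly is maximally accurate on clean data but wildly sensitive to tiny data changes; adding Tikhonov (ridge) regularization, with an illustrative weight `lam`, damps that sensitivity at the cost of a small residual:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])                  # nearly singular matrix
b      = np.array([2.0, 2.0001])               # exact solution is [1, 1]
b_pert = np.array([2.0, 2.0002])               # data perturbed by only 1e-4

def ridge(A, b, lam):
    """argmin ||Ax - b||^2 + lam * ||x||^2 (Tikhonov regularization)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Accurate but unstable: the exact solve fits the data perfectly,
# yet the 1e-4 perturbation moves the answer by an amount of order 1.
x_exact  = np.linalg.solve(A, b)
x_exact2 = np.linalg.solve(A, b_pert)
print(np.linalg.norm(x_exact2 - x_exact))      # large (~1.4)

# Stable but less accurate: regularization suppresses the perturbation's
# effect, while the fit to the clean data is no longer exact.
lam = 1e-4                                     # illustrative choice
x_reg  = ridge(A, b, lam)
x_reg2 = ridge(A, b_pert, lam)
print(np.linalg.norm(x_reg2 - x_reg))          # tiny
print(np.linalg.norm(A @ x_reg - b))           # small but nonzero residual
```

Here `lam` plays the role of the stability knob: turning it up makes the answer robust to data noise but pulls it away from the most accurate fit, mirroring the accuracy-for-stability sacrifice Hansen describes.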

As part of the new study, the researchers developed Fast Iterative REstarted NETworks (FIRENETs), neural networks that can deliver stable and accurate results on tasks such as analyzing medical images.

The researchers argue that these new findings about the limitations of neural networks are not meant to discourage AI research. "Figuring out what can and cannot be done is healthy for AI research in the long run. Note that the negative results of Turing and Gödel triggered dramatic changes in the foundations of mathematics and of computer science, which led to much of modern computer science and modern logic, respectively," Colbrook said.

More specifically, the researchers believe these findings imply the existence of a classification theory describing which stable neural networks of a given accuracy can be computed by algorithms. To return to the earlier cake analogy: "This would be a classification theory describing which cakes can be baked with a blender that could physically be built. If a cake cannot be baked, we also want to know how close we can get to the cake we want," Antun said.
