The Paper reporter Zhang Jing
The Boltzmann machine developed by Hinton became an early example of a generative model. It is often used as part of a larger network, for example to recommend films or TV series based on viewers' preferences.
Machine learning differs from traditional software, which works like a recipe: the program receives data and processes it according to an explicit description to produce a result, just as a cook gathers ingredients and follows a recipe. In machine learning, by contrast, a computer learns from examples, which lets it tackle problems too fuzzy and complex to be handled by step-by-step instructions.
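To make the contrast concrete, here is a minimal sketch, not from the article: the same task solved once with an explicit, recipe-style rule and once by fitting parameters to examples. The temperature-conversion task and all names are illustrative assumptions.

```python
# Illustrative only: "recipe" software vs. learning from examples.
import numpy as np

def convert_with_rule(celsius):
    # Traditional software: a fixed, hand-written formula.
    return celsius * 9 / 5 + 32

# Machine learning: only input/output examples are given...
examples_c = np.array([0.0, 10.0, 20.0, 37.0, 100.0])
examples_f = np.array([32.0, 50.0, 68.0, 98.6, 212.0])

# ...and the program infers the rule (here, a least-squares line fit).
slope, intercept = np.polyfit(examples_c, examples_f, deg=1)

print(convert_with_rule(25.0))   # 77.0, from the explicit rule
print(slope * 25.0 + intercept)  # ~77.0, learned from the examples
```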
John J. Hopfield and Geoffrey E. Hinton won the 2024 Nobel Prize in Physics "for foundational discoveries and inventions that enable machine learning with artificial neural networks."
On October 8, the 2024 Nobel Prize in Physics unexpectedly went to research on machine learning. Even one of the winners, Geoffrey E. Hinton, said in a telephone interview with the Royal Swedish Academy of Sciences after the announcement that he "didn't expect it." John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto in Canada were awarded the prize "for foundational discoveries and inventions that enable machine learning with artificial neural networks." The winners will share 11 million Swedish kronor (about 7.45 million yuan) in prize money.

Why did the Nobel Prize in Physics go to machine learning? Machine learning has exploded over the last 15 to 20 years, built on a structure called the artificial neural network. When we talk about artificial intelligence today, we are usually referring to machine learning with artificial neural networks. While computers cannot think, machines can now simulate functions such as memory and learning, thanks to the creative work of this year's laureates.

The two laureates, John Hopfield and Geoffrey Hinton, have been doing important work on artificial neural networks since the 1980s. They used the tools of physics to develop methods that are the foundation of today's powerful machine learning. Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Given an incomplete or slightly distorted pattern, Hopfield's method finds the most similar stored pattern.

In 1980, Hopfield left his position at Princeton University; his research interests had taken him beyond the problems his physics colleagues were working on. He moved to Caltech as a professor of chemistry and biology, where he could use computing resources to run free experiments and develop his ideas about neural networks. But he did not abandon his grounding in physics. In magnetic materials, the spin of each atom makes it a tiny magnet, and the spins of neighboring atoms affect one another. Drawing on his understanding of such materials, Hopfield built a model network of nodes and connections using the physics that describes how a material develops as its spins interact.

Hopfield and others went on to refine how the Hopfield network functions, for example with nodes that can store any value, not just 0 or 1. If you think of the nodes as pixels in a picture, they can take different colors rather than only black or white. The improved method makes it possible to store more pictures and to tell them apart even when they are very similar.
Image from the official Nobel Prize website.
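The mechanism can be sketched in a few lines of code. The toy Hopfield network below was written for this piece rather than taken from the laureates' work: it stores patterns with a Hebbian rule and recalls a stored pattern from a distorted one by updating one node at a time; the ten-node patterns and variable names are illustrative assumptions.

```python
# A minimal Hopfield-network sketch (illustrative, not the laureates' code).
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian rule: strengthen connections between co-active nodes."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:          # p has entries +1 / -1
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)      # no self-connections
    return w / len(patterns)

def recall(w, state, steps=100):
    """Update one randomly chosen node at a time; each update never
    raises the network's energy, so the state settles into a stored
    pattern (an energy minimum)."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Two stored "memories" of 10 binary nodes.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, 1, -1, -1, -1, -1, -1]])
w = store(patterns)

# A distorted version of the first pattern is driven back to it.
noisy = patterns[0].copy()
noisy[:3] *= -1                 # flip three nodes
print(recall(w, noisy))
```

The asynchronous, one-node-at-a-time update is what makes the dynamics settle: with symmetric connections and no self-connections, every update moves the state downhill in energy toward the nearest stored pattern.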
Remembering an image is one thing; interpreting what it depicts is quite another. Hinton, who had studied experimental psychology and artificial intelligence in England and Scotland, wondered whether machines could learn to process patterns the way humans do, classifying and interpreting information. When Hopfield published his article on associative memory in 1982, Hinton was working at Carnegie Mellon University. He used the Hopfield network as the foundation for a new network built on a different method: the Boltzmann machine, which can learn to recognize characteristic elements in a given type of data. The method was published in 1985. Hinton used tools from statistical physics and trained the machine by feeding it examples. Rather than following instructions, the Boltzmann machine learns from the examples it is given, and it can classify images or create new examples of the kind of pattern it was trained on. The Boltzmann machine updates the value of one node at a time; eventually the machine reaches a state in which the pattern of the nodes can change while the properties of the network as a whole remain the same. According to the Boltzmann equation, each possible pattern then has a specific probability, determined by the energy of the network. When the machine is stopped, it has created a new pattern, which makes the Boltzmann machine an early example of a generative model.
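The one-node-at-a-time update can also be illustrated in code. The sketch below assumes a tiny fully connected Boltzmann machine with fixed, untrained weights; the sizes, names, and parameters are assumptions made for illustration, not code from the 1985 paper.

```python
# A minimal Boltzmann-machine sampling sketch (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

n = 6                                  # number of binary nodes (0/1)
w = rng.normal(scale=0.5, size=(n, n))
w = (w + w.T) / 2                      # symmetric connections
np.fill_diagonal(w, 0)                 # no self-connections
b = rng.normal(scale=0.1, size=n)      # per-node biases

def energy(s):
    """E(s) = -0.5 * sum_ij w_ij s_i s_j - sum_i b_i s_i; the Boltzmann
    distribution assigns each pattern a probability proportional to
    exp(-E(s))."""
    return -0.5 * s @ w @ s - b @ s

def gibbs_step(s):
    """Update one node at a time: each node turns on with a probability
    set by the energy difference between its two states."""
    for i in rng.permutation(n):
        delta_e = w[i] @ s + b[i]      # energy drop if node i is on
        p_on = 1 / (1 + np.exp(-delta_e))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# After many sweeps the network reaches equilibrium: individual nodes
# keep flipping, but the distribution over whole patterns stays the same.
s = rng.integers(0, 2, size=n).astype(float)
for _ in range(1000):
    s = gibbs_step(s)
print(s, energy(s))                    # one sampled ("generated") pattern
```

Stopping the sampler and reading off the node values is what the text means by the machine "creating a new pattern": the sample is drawn from the probability distribution defined by the network's energy.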
In the 1990s, many researchers lost interest in artificial neural networks, but Hinton was among the scientists who kept exploring the field, and that work helped kick-start the current explosive development of machine learning. In 2006, he and his colleagues developed a method for pre-training a network built from a series of stacked Boltzmann machines. The pre-training gives the connections in the network a better starting point, which optimizes subsequent training to recognize elements in images. The Boltzmann machine is often used as part of a larger network, for example to recommend films or TV series based on viewers' preferences. It is worth mentioning that Hinton, together with Yoshua Bengio and Yann LeCun, received the 2018 Turing Award for contributions to deep learning. Today's artificial neural networks are often enormous and built from many layers; they are called deep neural networks, and the way they are trained is called deep learning. Artificial intelligence is increasingly penetrating every industry and aiding scientific research. The Nobel committee's citation notes that physics provided tools for the development of machine learning, and that it will be interesting to see how physics as a field can in turn benefit from artificial neural networks.
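The layer-wise pre-training idea can be sketched as well. The toy training loop below, including all layer sizes and learning rates, is an assumption made for illustration in the spirit of the 2006 method, not the published implementation: each restricted Boltzmann machine (RBM) is trained on the hidden activities of the layer beneath it.

```python
# A rough sketch of greedy layer-wise RBM pre-training (illustrative).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.05):
    """One-step contrastive divergence (CD-1) for a single RBM layer;
    biases are omitted here to keep the sketch short."""
    n_visible = data.shape[1]
    w = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ w)                       # hidden given data
        h_sample = (rng.random(h0.shape) < h0) * 1.0
        v1 = sigmoid(h_sample @ w.T)               # reconstruction
        h1 = sigmoid(v1 @ w)
        w += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
    return w

# Stack RBMs: each layer is trained on the hidden activities of the
# previous one, giving the deep network a good starting point before
# ordinary fine-tuning.
data = (rng.random((200, 16)) < 0.5) * 1.0         # toy binary data
weights = []
for n_hidden in (8, 4):
    w = train_rbm(data, n_hidden)
    weights.append(w)
    data = sigmoid(data @ w)                        # input for next layer
```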
This issue is edited by Xing Tan