
A Brief History of Artificial Intelligence


Source: KDnuggets News

Author: Francesco Corea, Decision Scientist and Data Analyst

Translation: Wang Weiling

Proofreading: Ye Gongbing

This article is about 3,400 words; estimated reading time: 6 minutes.

This article briefly reviews the development of artificial intelligence over the past 60 years and offers a quick refresher on its history.


1. Origins

Artificial intelligence is very popular now, but it is not a new field of research: it was born in the 1950s. Setting aside the purely philosophical lineage that runs from ancient Greece through Hobbes, Leibniz, and Pascal, research in the field officially began in 1956 at a conference held at Dartmouth College, where the most famous experts of the time gathered to brainstorm and discuss the simulation of intelligence.

The conference took place only a few years after Asimov proposed his three laws of robotics and, more precisely, only a few years after Turing's famous 1950 paper, in which he first proposed the idea of a thinking machine along with the now widely accepted Turing test for assessing whether such a machine actually exhibits intelligence.

As the Dartmouth research group publicly shared the content and ideas produced during that summer conference, government funding began to flow toward innovative research into non-biological intelligence.

2. Phantoms

At the time, AI seemed easy to achieve, but it was not. By the end of the 1960s, researchers had realized that artificial intelligence was a genuinely difficult field, and the funding that had initially supported the idea began to shrink.

This phenomenon recurs throughout the history of AI and is often referred to as the "AI effect", which has two parts:

There is always a promise that true AI will arrive in the next decade;

Whenever AI solves a task thought to require human intelligence, people claim that it was not solved with intelligence at all and therefore does not count as intelligent. As a result, the definition of intelligence is constantly being redrawn.

In the United States, DARPA's main motivation for funding AI research was to achieve perfect machine translation, but two successive events undermined that ambition and brought about what is known as the first AI winter.

In fact, the report of the Automatic Language Processing Advisory Committee (ALPAC) in 1966 and the subsequent Lighthill Report (1973) both assessed the feasibility of artificial intelligence, analyzed the state of the field at the time, and concluded that it was not yet possible to build machines with human-like intelligence.

Both reports were drafted at a time when the data available to feed algorithms and the computing power of machines were severely limited, and they led to roughly a decade of stagnation across the field of artificial intelligence.

3. The rise of expert systems

Although the 1980s saw a new round of funding for AI research in the United Kingdom and Japan, driven by the introduction of expert systems, this work essentially belongs to the category of narrow artificial intelligence as defined in earlier papers.

The fact that these programs could simulate the skills of human experts only within a particular field was nevertheless enough to spark a new wave of funding. The most active player in those years was the Japanese government, whose plan to build fifth-generation computers indirectly pushed the United States and Great Britain to resume funding AI research.

However, this golden age did not last long, and when the investment failed to deliver on its goals, a new crisis arose. In 1987, personal computers overtook Lisp machines, the product of years of artificial intelligence research, in performance. This triggered a second AI winter, with the US Defense Advanced Research Projects Agency explicitly turning against AI research and its funding.

4. The return of artificial intelligence

Thankfully, the winter ended in 1993. MIT's Cog project to build a humanoid robot and the Dynamic Analysis and Replanning Tool (DART), which reportedly paid back the U.S. government's entire investment in AI since 1950, put the field back on track. In 1997, Deep Blue defeated world chess champion Garry Kasparov, bringing artificial intelligence back into the spotlight.

A great deal of research has been done in academia over the past two decades, but only recently has AI come to be seen as a paradigm shift. There are of course many reasons why so much money is being poured into AI right now, but I believe one specific event had a decisive impact on the development of AI over the past five years.

As the chart below shows, AI has made great strides, but it was not widely embraced until the end of 2012. The chart was produced with CB Insights Trends, using "artificial intelligence" and "machine learning" as the main keywords.


Figure: Artificial intelligence development trends, 2012-2016

More specifically, I have marked the date that truly sparked this new wave of AI: December 4, 2012. That Tuesday, a team of researchers presented at the Neural Information Processing Systems (NIPS) conference the details of the convolutional neural network that had won them first place in the ImageNet classification competition a few weeks earlier. Their work raised the accuracy of classification algorithms from 72% to 85% and laid the groundwork for adopting neural networks as the foundation of modern artificial intelligence.

In less than two years, entries in the ImageNet competition reached 96% accuracy on the classification task, slightly above the roughly 95% accuracy of humans.

The dotted lines in the chart also mark three important upswings in the development of artificial intelligence, corresponding to three major events:

DeepMind, an artificial intelligence company founded three years earlier, was acquired by Google in January 2014;

In February 2015, an open letter organized by the Future of Life Institute was signed by more than 8,000 people, and DeepMind published its research on reinforcement learning (Mnih et al., 2015);

A paper by DeepMind researchers published in Nature (Silver et al., 2016), followed by AlphaGo's impressive victory over Lee Sedol in March 2016 (for the series of impressive results that followed, see Ed Newton-Rex's article).

5. Future prospects

Artificial intelligence is inherently heavily dependent on funding, since it is a research field that requires long-term investment and consumes enormous human and material resources.

More worryingly, we may already be at the next peak (Dhar, 2016), which could likewise come to an abrupt halt.

However, like many others, I believe this new era is different in three ways:

When it comes to big data, we now have vast amounts of data to feed into our algorithms;

In terms of technological progress, advances in storage capacity, computing power, algorithmic understanding, better and faster bandwidth, and lower technology costs mean we can actually build models that extract the information we need;

In terms of business models, services such as Uber and Airbnb have shown how resources can be allocated more efficiently, and they fully demonstrate the appeal of cloud services (such as Amazon Web Services) and parallel computing on graphics processing units (GPUs).
