Several Paradoxes of AI Artificial Intelligence
In recent years, artificial intelligence has been on the rise. In particular, AlphaGo Zero needed no knowledge of human Go experience: after 40 days of self-play it dominated the game. This has once again prompted people to reflect on, and question, computation and intelligence. At least three issues face conflicting demands and have become three paradoxes of artificial intelligence.
The Moravec paradox. Moravec and other scholars observed that the high-level reasoning unique to humans requires only very little computation, while unconscious skills and perception demand enormous computational resources; in short, "the hard problems are easy and the easy problems are hard." This paradox may reflect a limitation of the Turing machine model and the need for a new model better suited to perceptual computing. In fact, Turing's ground-breaking paper also defined c-machines (choice machines) and u-machines (unorganized machines) as models for describing thought. After the 1960s, however, the Turing machine gradually shifted from a tool for clarifying uncomputability to a model for explaining computability, and its limitations were gradually forgotten.
Today's prevailing computers are based on the von Neumann architecture, an implementation of the Turing machine model. Von Neumann found that designing a computer by imitating neural networks would not work, so from the very first electronic computers onward, computers and brains have developed along separate paths, leaving the way computers implement artificial intelligence essentially disconnected from the brain's mechanisms of thought. Today a penny buys ten thousand transistors on an integrated circuit; integrated circuits and software have accumulated incalculable material wealth, which constitutes an enormous inertia. Advancing artificial intelligence must reckon with the enormous inertia of the computer industry while also seeking to break through the limitations of the Turing machine model. This is the first dilemma we face.
The new-knowledge paradox. It is often said that big data and machine learning discover new knowledge from data, and AlphaGo Zero has shown that statistical learning on a computer can acquire knowledge of Go that humans have not yet mastered. Yet some scholars in computer science hold that the computer is a mechanical, repeatable deterministic machine that is essentially incapable of invention: its operation reduces to transformations of existing symbols, the conclusions are already contained in the premises, so no new knowledge is produced and human understanding of the objective world is not advanced. (Lin Huimin expressed such a view at a CNCC forum.) Is the knowledge obtained by machine learning already contained in the software before it runs? How do mechanical, repeatable computations ultimately yield new knowledge? Research on these questions has so far reached only the level of "knowing that it works without knowing why." This too is a perplexing issue.
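The question "is the learned knowledge already in the software before it runs?" can be made concrete with a minimal sketch (all names and values here are illustrative, not from the original article): a least-squares fit recovers the slope of a line from data, yet that slope appears nowhere in the program text, only in the data it is given.

```python
# A minimal illustration of "learning" something not written in the code:
# the program below recovers the slope 3.0 from data, but the constant 3.0
# appears nowhere in the program itself -- only in the data.

def fit_slope(points):
    """Least-squares slope of a line through the origin: w = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / sxx

# Data generated by a rule (y = 3x) that the code does not contain.
data = [(1, 3.0), (2, 6.0), (3, 9.0), (4, 12.0)]
print(fit_slope(data))  # prints 3.0
```

Whether one counts the recovered slope as "new knowledge" or merely as a mechanical consequence of the data and the fitting rule is exactly the philosophical question the paradox raises.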
The heuristics paradox. Heuristic search is the most fundamental technique of artificial intelligence. Like the Internet's "best effort" delivery model, heuristic search guarantees neither that a solution will be found nor that a found solution is optimal. The translated article "The Real Risks of Artificial Intelligence" in this issue warns that using heuristic algorithms to build devices that give an illusion of intelligence may bring risks we cannot afford. Professor Shing-Tung Yau also pointed out at the CNCC conference that the theoretical foundation of artificial intelligence is very thin, and that a provable theory is needed as its basis. However, most of the problems artificial intelligence deals with are NP-hard and likely admit no exact polynomial-time algorithm; indeed, once an exact formula like F=ma is found for a problem, it is no longer an artificial intelligence problem. We must pay close attention to the risks of heuristic algorithms, but it is inappropriate to judge artificial intelligence by the standards of traditional engineering science; the demands are different.
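The lack of guarantees can be seen in a toy sketch (the graph and heuristic values below are invented for illustration): greedy best-first search, guided by a plausible but misleading heuristic, returns a path costing 11 when the optimum costs 5.

```python
import heapq

# Greedy best-first search on a toy graph: the heuristic h makes A look
# closer to the goal G than B, so the search commits to S->A->G (cost 11)
# and never finds the cheaper S->B->G (cost 5). Heuristic search trades
# optimality guarantees for speed.

graph = {              # node -> list of (neighbor, edge cost)
    "S": [("A", 1), ("B", 4)],
    "A": [("G", 10)],
    "B": [("G", 1)],
    "G": [],
}
h = {"S": 2, "A": 1, "B": 5, "G": 0}  # misleading estimates of distance to G

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest heuristic value."""
    frontier = [(h[start], start, [start], 0)]  # (h, node, path, path cost)
    visited = set()
    while frontier:
        _, node, path, cost = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nbr, c in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr], cost + c))
    return None, float("inf")

path, cost = greedy_best_first("S", "G")
print(path, cost)  # ['S', 'A', 'G'] 11 -- the optimum S->B->G costs only 5
```

Replacing the priority `h[nbr]` with `cost + c + h[nbr]` would turn this into A* search, which does guarantee optimality when the heuristic never overestimates, but only at a computational price that NP-hard problems make prohibitive in general.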
After 60 years of cultivation, artificial intelligence has grown into a big tree laden with fruit. We cannot simply shake this big tree desperately and pick up the scattered fruit on the ground; rather, with seriousness and humility before the unknown, we should plant a variety of new saplings to address these paradoxes.