A brief discussion on deep learning in AI
Artificial intelligence is everywhere these days, and news outlets publish stories about it constantly. Some claim that IBM's Watson can fully replace human workers; others say that today's algorithms already outperform doctors in medicine. New AI startups appear every day, each claiming that it is applying machine learning to completely transform your personal life. Much of this is marketing.
Even familiar products like juicers and wireless routers have adopted a new slogan overnight: "Powered by artificial intelligence!" A smart desk not only knows the right working height for you throughout the day, it can even help you order lunch.
But what is the truth? The reporters covering these stories may never have personally trained a neural network, and the startups and marketing teams have their own agendas: they want to build a reputation and attract capital and talent, even if they are not solving any problem that actually exists.
In such an atmosphere, it is no wonder there are so many self-styled experts in the field of artificial intelligence. In truth, we have not yet fully figured out what AI can and cannot do.
That deep learning is a genuinely exciting technology is beyond dispute.
In fact, the concept of the neural network dates back to the 1960s. Only the recent leaps in big data and computing power have made it truly useful, giving rise to the subfield of "deep learning," which applies complex neural network architectures to data modeling and achieves unprecedented accuracy.
The results are indeed impressive. Computers can now recognize what is in images and videos and transcribe speech to text more efficiently than humans can. Google has integrated neural networks into Google Translate, and machine translation is gradually approaching human quality.
Some real-world applications are eye-opening. Computers can predict farm crop yields more accurately than the US Department of Agriculture, and machines can diagnose cancer more accurately than veteran doctors. John Launchbury of DARPA (the US Defense Advanced Research Projects Agency) has described three waves in the field of artificial intelligence:
1. Handcrafted knowledge: expert systems such as IBM's Deep Blue and Watson.
2. Statistical learning: this includes machine learning and deep learning.
3. Contextual adaptation: building reliable, explanatory models of real-world situations from only small amounts of data, the way humans do.
As for the second wave, research on deep learning algorithms has progressed well thanks to what Launchbury calls the "manifold hypothesis."
But deep learning also has some thorny problems.
At a recent AI conference in the Bay Area, Google AI researcher François Chollet stressed that deep learning goes beyond ordinary statistics and machine learning methods: it is a very powerful tool for building differentiable models. But it cannot be denied that it has serious limitations, at least for now.
Deep learning's results rest on extremely demanding preconditions.
Whether it is "supervised learning" or "reinforcement learning," both require vast amounts of data to work, perform very poorly on unfamiliar scenarios, and can only handle the simplest, most direct pattern-recognition tasks.
Humans, in contrast, can extract valuable information from a very small number of examples, are good at long-term planning, and can build a general model of a situation and then apply it, generalizing from the particular to the abstract.
In fact, even something as ordinary as a pedestrian crossing the street is hard for deep learning algorithms. Take an example: suppose we want a machine to learn how to avoid being hit by cars on the road.
If you use the "supervised learning approach," you need to extract a large amount of data from driving situations and organize it with clearly labeled "action tags," such as "stop" and "go." Then you need to train a neural network so it can build a causal link between the situation at hand and the corresponding action.
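The supervised route above can be sketched with a toy model. The features (distance and speed of an oncoming car), the labels, and the use of a simple perceptron in place of a deep network are all invented for illustration; a real system would learn from camera images.

```python
# Each labeled situation: (distance_to_car_m, car_speed_mps) -> 1 = "stop", 0 = "go"
# These numbers are made up for the sketch.
data = [
    ((2.0, 10.0), 1),
    ((3.0, 8.0), 1),
    ((25.0, 2.0), 0),
    ((30.0, 0.0), 0),
    ((5.0, 6.0), 1),
    ((40.0, 1.0), 0),
]

# A minimal perceptron stands in for the neural network.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(50):                      # a few passes over the labeled data
    for (d, s), label in data:
        x = (-d, s)                      # closer and faster should push toward "stop"
        pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
        err = label - pred               # classic perceptron update rule
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

def action(distance, speed):
    score = -distance * w[0] + speed * w[1] + b
    return "stop" if score > 0 else "go"

print(action(2.5, 9.0))   # a near, fast car -> "stop"
print(action(35.0, 1.0))  # a far, slow car -> "go"
```

The point of the sketch is the shape of the pipeline, not the model: labeled situations in, a learned situation-to-action mapping out.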
If you use the "reinforcement learning approach," you give the algorithm a goal and let it independently determine the optimal action in each situation. To learn the action that avoids a collision, the computer would have to be "run over" thousands of times in different contexts.
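The reinforcement-learning route can be sketched the same way. Here the agent is given only a reward signal and must discover, by trial and error across many simulated "collisions," that crossing in front of a car is bad. The states, actions, and reward values are invented for the example.

```python
import random

random.seed(0)
states = ["car_near", "road_clear"]
actions = ["cross", "wait"]
Q = {(s, a): 0.0 for s in states for a in actions}   # action-value table

def reward(state, action):
    if action == "cross":
        return -100.0 if state == "car_near" else 10.0   # hit vs. safe crossing
    return -1.0                                          # waiting costs a little time

alpha = 0.5  # learning rate
for _ in range(1000):                       # thousands of simulated trials
    s = random.choice(states)
    a = random.choice(actions)              # explore at random
    # One-step Q-learning update (each episode ends after one action here)
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

# The learned policy: wait when a car is near, cross when the road is clear.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)
```

Nothing ever told the agent that being hit is a collision; it had to experience the -100 reward many times to learn the rule a human grasps from a single warning.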
Chollet's conclusion: with the technology of today, or even of the near future, you cannot achieve intelligence in the ordinary sense.
Humans are different: you only need to tell someone once to watch out for cars. Our brains can extract experience from just a few examples and conjure up the horrifying scene of being run over (what a computer would call "modeling"). To avoid losing life or limb, most people quickly learn the essentials of not getting hit by a car.
Despite the great progress, neural networks that produce striking statistical results over large samples are unreliable when taken case by case. They make mistakes that a human would never make in a lifetime, such as mistaking a toothbrush for a basket.
Unstable data quality makes results unreliable, inaccurate, and unfair.
Moreover, the results depend entirely on the quality of the input data. In a neural network, if the input data is inaccurate or incomplete, the results can be wildly wrong, sometimes not just costly but deeply embarrassing. Google Photos, for example, mistakenly labeled African-Americans as gorillas. Microsoft put an AI bot on Twitter to learn from users, and within a few hours it was spewing profanity and serious racial slurs.
The Twitter example may be extreme, but it is undeniable that the data we feed in carries some degree of prejudice and discrimination: implicit, barely perceptible assumptions that we sometimes cannot even detect. For example, word2vec, an open-source word-embedding tool released by Google, was trained on 3 million words from Google News. The patterns carried in that data include associations like "Dad is a doctor, Mom is a nurse," which plainly encode gender bias.
Such bias is not only carried into the digital world but amplified there. If the word "doctor" is associated more strongly with "man" than with "woman," then an algorithm ranking candidates for an open doctor position will tend to put men ahead of women.
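A toy example makes this concrete. The vectors below are made up for illustration (real word2vec vectors are learned from corpora like Google News and have hundreds of dimensions), but they show how an embedding that places "doctor" nearer to "man" would skew any system that scores candidates by similarity to a job title.

```python
import math

# Hypothetical 2-d "embeddings"; the bias is baked in deliberately
# to mirror the pattern described in the text.
vectors = {
    "man":    [0.9, 0.1],
    "woman":  [0.1, 0.9],
    "doctor": [0.8, 0.3],   # hypothetically closer to "man"
    "nurse":  [0.2, 0.8],   # hypothetically closer to "woman"
}

def cosine(u, v):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A ranker that scores people by similarity to the job title "doctor"
# would inherit this asymmetry directly.
print(cosine(vectors["doctor"], vectors["man"]))    # higher
print(cosine(vectors["doctor"], vectors["woman"]))  # lower
```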
Beyond inaccuracy and unfairness, there is an even bigger risk: insecurity.
Ian Goodfellow, inventor of the generative adversarial network (GAN), reminds us that today's neural networks can easily be manipulated by bad actors. They can tamper with an image in a way the human eye cannot detect, causing the machine to misclassify it.
In the well-known example, the original image is a panda (the machine is 57.7% confident). After an imperceptible noise pattern is added, the machine becomes 99.3% confident that the picture is a gibbon.
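The trick behind the panda/gibbon example is the fast gradient sign method (FGSM): nudge every input value a tiny, fixed amount in the direction that most increases the wrong class's score. The sketch below applies the same idea to a toy linear classifier with invented weights and an "image" of four features; the perturbation size is exaggerated so the flip is visible on such a small model.

```python
# Two-class linear model: score(class) = w_class . x  (weights are made up)
w = {
    "panda":  [1.0, -0.5, 0.8, 0.2],
    "gibbon": [0.2, 0.6, -0.4, 0.9],
}
x = [0.9, 0.1, 0.7, 0.3]   # toy "image", correctly classified as "panda"

def score(cls, v):
    return sum(a * b for a, b in zip(w[cls], v))

def predict(v):
    return max(w, key=lambda cls: score(cls, v))

# FGSM step: for a linear model, the gradient of
# (gibbon_score - panda_score) with respect to x is just w_gibbon - w_panda,
# so we move each feature by +-eps along the sign of that gradient.
eps = 0.4
grad = [g - p for g, p in zip(w["gibbon"], w["panda"])]
x_adv = [xi + eps * (1 if gi > 0 else -1) for xi, gi in zip(x, grad)]

print(predict(x))      # "panda"
print(predict(x_adv))  # "gibbon" -- a small, structured nudge flips the label
```

On a real network the same per-pixel nudge can be far below what the eye can see, which is exactly what makes the attack so unsettling.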
Do not underestimate this risk. Tampering with an AI system in this way can cause great harm, precisely because the falsified image looks identical to the original to us. Driverless cars, for example, could run into serious trouble.
These, then, are the bottlenecks of deep learning: the preconditions it requires are too demanding, the input data decisively shapes the final result, it still has many flaws, and its security cannot be guaranteed. If artificial intelligence is to reach its ideal in the future, these bottlenecks remain to be broken through.