Why deep learning is taking off
It can be seen that as we feed the algorithms more training data, the performance of every algorithm improves, but they differ in how. For a traditional machine learning algorithm (the black curve), performance rises at first but then plateaus: no matter how much more training data you add, performance stops improving, and the extra data is wasted. A small neural network keeps improving as the data grows, but only by a small amount; a medium-sized network improves more, and a large network improves the most.
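As a rough illustration of those curves, the sketch below trains models of increasing capacity on increasing amounts of labeled data and reports test accuracy. The dataset and model choices (logistic regression as the "traditional" algorithm, scikit-learn MLPs as the small and larger networks) are illustrative assumptions, not part of the original discussion, and the exact numbers will vary.

```python
# Toy experiment: accuracy vs. amount of training data for models of
# different capacity. All model/dataset choices here are assumptions
# made for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic labeled data (X, y): the "training data with labels" the text describes.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

models = {
    "traditional (logistic regression)": LogisticRegression(max_iter=1000),
    "small neural network": MLPClassifier(hidden_layer_sizes=(8,),
                                          max_iter=500, random_state=0),
    "larger neural network": MLPClassifier(hidden_layer_sizes=(64, 64),
                                           max_iter=500, random_state=0),
}

for n in (100, 500, 3000):          # increasing amounts of training data
    for name, model in models.items():
        model.fit(X_train[:n], y_train[:n])
        acc = model.score(X_test, y_test)
        print(f"n={n:5d}  {name}: {acc:.3f}")
```

On a run like this you would typically see all models improve as n grows, with the higher-capacity networks benefiting more from the largest training set, mirroring the curves the text describes.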
The training data discussed above is labeled data, that is, pairs of inputs X and labels Y. Not just any data will do: the neural network only learns from labeled examples, because what we are describing here is supervised learning. On a small training set, it is quite possible that someone with a well-tuned traditional machine learning algorithm will outperform your large neural network.
The progress of deep learning has depended mainly on the ever-growing amount of training data and ever-growing computing power (CPUs and GPUs). But in the past few years we have also begun to see major algorithmic innovation. Of course, many of these innovations aim to make neural networks train faster, which is equivalent to more computing power. For example, one great breakthrough was replacing the sigmoid activation function with the ReLU function. The sigmoid's slope is nearly zero in some regions, which makes learning slow, while the ReLU's slope is a constant 1 for all positive inputs. These algorithmic advances greatly shorten the training cycle of neural networks, allowing us to train larger networks and use more data in applications.
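The sigmoid-vs-ReLU point can be made concrete by computing the two gradients directly. This is a minimal sketch: for large |z| the sigmoid's gradient vanishes, while the ReLU's gradient stays at 1 for any positive input.

```python
import numpy as np

def sigmoid(z):
    """Classic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    """Derivative of sigmoid: s(z) * (1 - s(z)); near zero when |z| is large."""
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    """Derivative of ReLU: 0 for z <= 0, 1 for z > 0."""
    return (z > 0).astype(float)

z = np.array([-10.0, -1.0, 0.5, 10.0])
print(sigmoid_grad(z))  # tiny at z = -10 and z = 10: learning slows down
print(relu_grad(z))     # exactly 1 wherever z > 0: the gradient does not vanish
```

Because gradient descent updates are proportional to these derivatives, the near-zero sigmoid gradients in the saturated regions are exactly what made deep sigmoid networks slow to train.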
Another important reason powerful computing matters is that it lets you test your ideas faster, so you arrive at better ideas sooner. For example, you have an idea for a neural network architecture: you implement it in code, run it, train it, look at the results, analyze them, revise the details of your idea, then run and inspect it again. Being able to move through this trial-and-error cycle quickly is critical.
Therefore, the three key elements behind deep learning's rise are data, computing power, and algorithms. Algorithms keep being innovated, training data keeps accumulating, and CPU and GPU computing power keeps growing, so I am confident that deep learning will become stronger and stronger.