Ten kinds of deep learning methods that need to be understood at this stage
Whether in AI or any other discipline, pausing from time to time during study to review the field's history, summarize its current state, and identify its most important ideas tends to give learners a coherent through-line. Software engineer James Le recently drew on his own experience to summarize the ten deep learning methods AI practitioners need to master, and his list is quite inspiring.
The 10 Deep Learning Methods AI Practitioners Need to Apply
Interest in machine learning has exploded over the past decade. You can see traces of machine learning in computer science programs, industry conferences, and media reports alike. Yet in all the discussion of machine learning, people often confuse what AI can do with what they want AI to do.
Fundamentally, machine learning uses algorithms to extract information from raw data and represent it in some kind of model. We then use that model to make inferences about other data we have not yet modeled.
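The extract-a-model-then-infer loop above can be sketched in a few lines. This is a minimal, hypothetical example (the data points are made up): fit a line to observed data, then use the fitted model to predict at an input we have not seen.

```python
import numpy as np

# Hypothetical raw data: inputs x and noisy observed outputs y.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# "Extract information from raw data": fit a simple model y ~ a*x + b.
a, b = np.polyfit(x, y, deg=1)

# "Infer other data we have not yet modeled": predict at an unseen input.
x_new = 5.0
y_pred = a * x_new + b
```

A linear model stands in here for whatever model family is appropriate; the same fit-then-predict pattern applies to neural networks.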
Neural networks, one family of machine learning models, have existed for at least 50 years. The basic unit of a neural network is the node, which loosely models the biological neuron in the mammalian brain; the links between nodes (also modeled on the biological brain) evolve over time through training.
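A single node of the kind just described can be written out directly. This is a minimal sketch (the input values, weights, and learning rate are all made up for illustration): the node computes a weighted sum of its inputs and applies a nonlinear activation, and the links (weights) are adjusted over time toward a target output.

```python
import math

def neuron(inputs, weights, bias):
    # A node computes a weighted sum of its inputs plus a bias,
    # then applies a nonlinear activation (here the logistic sigmoid).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# The "links" are the weights; learning adjusts them over time.
# One hypothetical gradient-style update toward a target output of 1.0:
inputs = [1.0, 0.5]
weights = [0.2, -0.4]
bias, target, lr = 0.1, 1.0, 0.5

out = neuron(inputs, weights, bias)          # output before the update
error = target - out
weights = [w + lr * error * x for w, x in zip(weights, inputs)]
new_out = neuron(inputs, weights, bias)      # output moves toward the target
```

Real networks stack many such nodes into layers and update the weights with backpropagation, but the weighted-sum-plus-activation unit is the same.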
In the mid-1980s and early 1990s, many important neural network architectures were developed. However, achieving good results required computing power and data sets on a scale that was simply not available at the time, so enthusiasm for machine learning gradually faded. At the beginning of the twenty-first century, computing power began to grow exponentially, and the industry witnessed a "Cambrian explosion" of computing technology that would have been unthinkable before. Deep learning, an important framework within this field, has won many major machine learning competitions during a decade of explosive growth. The enthusiasm has not cooled this year; today, we see deep learning in every corner of machine learning.
To understand this better, I took a deep learning course and built an image recognition neural network as well as natural language processing models based on recurrent neural networks (RNN) and long short-term memory (LSTM).
Recently, I have also begun to read academic papers on deep learning. Here are a few papers I have collected that have had a significant impact on the field's development.
1, Gradient-Based Learning Applied to Document Recognition (1998)
Significance: introduced convolutional neural networks to the machine learning world
Author: Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner
2, Deep Boltzmann Machines (2009)
Significance: proposed a new learning algorithm for Boltzmann machines containing multiple layers of hidden variables.
Author: Ruslan Salakhutdinov, Geoffrey Hinton
3, Building High-Level Features Using Large-Scale Unsupervised Learning (2012)