Deep learning and machine learning frameworks
There is a difference between machine learning frameworks and deep learning frameworks. In essence, a machine learning framework covers a variety of learning methods for classification, regression, clustering, anomaly detection, and data preparation, and it may or may not include neural network methods. A deep learning or deep neural network (DNN) framework covers a variety of neural network topologies with many hidden layers. These layers make up a multi-step process of pattern recognition: the more layers in the network, the more complex the features that can be extracted for clustering and classification.
Caffe, CNTK, DeepLearning4j, Keras, MXNet, and TensorFlow are deep learning frameworks. Scikit-learn and Spark MLlib are machine learning frameworks. Theano straddles both categories.
In general, deep neural network computations run an order of magnitude faster on a GPU (specifically an Nvidia CUDA general-purpose GPU, for most frameworks) than on a CPU. In general, simpler machine learning methods do not need GPU acceleration.
Although you can train DNNs on one or more CPUs, training tends to be slow, and by slow I don't mean seconds or minutes. The more neurons and layers that need to be trained, and the more data available for training, the longer it takes. When the Google Brain team trained the language translation models for the new version of Google Translate in 2016, each training run took a week on multiple GPUs. Without GPUs, each model training experiment would have taken months.
Each of these frameworks has at least one distinguishing feature. Caffe's strength is convolutional DNNs for image recognition. Cognitive Toolkit has a separate evaluation library for deploying prediction models that works on ASP.Net sites. MXNet has good scalability for training on multi-GPU and multi-machine configurations. Scikit-learn offers a broad, robust collection of machine learning methods and is easy to learn. Spark MLlib integrates with Hadoop and has good scalability for machine learning. And TensorFlow provides a unique diagnostic tool, TensorBoard, for its computation graphs.
On the other hand, all of the deep learning frameworks train at nearly the same speed on GPUs, because the inner training loop spends most of its time in Nvidia's CuDNN package. However, each framework takes a different approach to describing neural networks, and there are two main camps: those that describe the network in a graph file, and those that create the description by executing code.
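The difference between the two camps can be sketched with a toy example. This is plain Python, not any framework's actual API, and the names are made up: a declarative style first records the network as a data structure and evaluates it later, while an imperative style computes results as the code runs.

```python
# Toy illustration of the two styles of describing a network.
# Declarative / graph-file style: record operations first, run them later.
graph = [("mul", 2.0), ("add", 1.0)]  # y = 2x + 1, recorded as a graph

def run_graph(graph, x):
    # Interpret the recorded graph node by node.
    for op, arg in graph:
        if op == "mul":
            x = x * arg
        elif op == "add":
            x = x + arg
    return x

# Imperative / code-execution style: the computation happens as written.
def run_eager(x):
    return 2.0 * x + 1.0

print(run_graph(graph, 3.0))  # 7.0
print(run_eager(3.0))         # 7.0
```

The graph style lets a framework inspect and optimize the whole network before running it; the code style is easier to debug because every step runs immediately.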
With that in mind, let's look at each framework's distinguishing features.
Caffe
The Caffe deep learning project was once a powerful image classification framework, but it now appears to be stagnating, judging from its lingering bugs, the fact that version 1.0 has been stuck at RC3 for more than a year, and the founders' departure for other projects. It still offers good convolutional-network image recognition and good support for Nvidia CUDA GPUs, as well as a straightforward network description format. On the other hand, its models often need large amounts of GPU memory (more than 1GB) to run, its documentation is spotty and problematic, support is hard to get, and installation is iffy, especially for its Python notebook support.
Caffe has command-line, Python, and Matlab interfaces, and it relies on ProtoText files to define its models and solvers. Caffe defines a network layer by layer in its own model schema; the network defines the entire model from input data to loss. As data and derivatives flow through the network in the forward and backward passes, Caffe stores, communicates, and manipulates the information as blobs (binary large objects), which internally are N-dimensional arrays stored in C-contiguous fashion (meaning the rows of the array are stored in contiguous blocks of memory, as in C). Blobs are to Caffe what tensors are to TensorFlow.
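The C-contiguous layout described above can be seen with NumPy, used here only to illustrate the memory layout; Caffe's own blobs are C++ objects:

```python
import numpy as np

# A 4-D array shaped like a typical Caffe blob:
# (batch, channels, height, width).
blob = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)

# C-contiguous means the last axis varies fastest in memory, so each
# row of the innermost dimension sits in one contiguous block.
print(blob.flags["C_CONTIGUOUS"])  # True
print(blob.strides)  # byte strides shrink from the outer to the inner axis
```

With 4-byte floats, the strides are (192, 64, 16, 4): moving one step along the innermost axis advances 4 bytes, while moving one image ahead in the batch jumps a full 3×4×4×4 = 192 bytes.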
Layers perform operations on blobs and constitute the components of a Caffe model. Layers convolve filters, perform pooling, take inner products, apply nonlinearities such as rectified-linear and sigmoid and other element-wise transformations, normalize, load data, and compute losses such as softmax and hinge.
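As a rough sketch of what a few of those layer types compute (NumPy, not Caffe code):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: an element-wise nonlinearity layer.
    return np.maximum(x, 0.0)

def inner_product(x, w, b):
    # Fully connected ("inner product") layer: y = xW + b.
    return x @ w + b

def softmax_loss(scores, label):
    # Softmax cross-entropy loss over one row of class scores.
    shifted = scores - scores.max()          # for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[label])

x = np.array([1.0, -2.0, 0.5])
w = np.ones((3, 2))
b = np.zeros(2)
h = relu(inner_product(x, w, b))   # both scores are max(1 - 2 + 0.5, 0) = 0
loss = softmax_loss(h, label=0)    # equal scores -> loss = ln(2)
```

A Caffe model chains such layers together, with each layer's output blob feeding the next layer's input.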
Caffe has proved its worth for image classification, but its moment seems to have passed. Unless an existing Caffe model fits your needs, or can be fine-tuned for your purposes, I recommend using TensorFlow, MXNet, or CNTK instead.
A precomputed Caffe Jupyter notebook, displayed in NBViewer. This notebook explains performing "surgery" on a Caffe network, using a beloved kitten as the example image.
Microsoft Cognitive Toolkit
Microsoft Cognitive Toolkit is a fast, easy-to-use deep learning package, but it is limited in scope compared with TensorFlow. It has a good variety of models and algorithms, excellent support for Python and Jupyter notebooks, an interesting declarative neural-network configuration language called BrainScript, and automated deployment in Windows and Ubuntu Linux environments.
On the downside, when I reviewed it at Beta 1, the documentation had not yet been fully updated for CNTK 2, and the package did not support macOS. Since Beta 1, CNTK 2 has seen many improvements, including new memory-reduction methods to lower GPU memory usage and new NuGet installation packages, but macOS support is still missing.
The Python API added at Beta 1 helps bring Cognitive Toolkit into the mainstream of Python-writing deep learning researchers. The API covers model definition and computation, learning algorithms, data reading, and distributed training. As a complement to the Python API, CNTK 2 has new Python examples and tutorials, along with support for Google Protocol Buffers serialization. The tutorials are provided as Jupyter notebooks.
CNTK 2 components can be invoked from Python, C++, or BrainScript.