Ten deep learning algorithms
Broadly speaking, there are three kinds of machine learning algorithms.
1. Supervised learning
How it works: this class of algorithms has a target or outcome variable (the dependent variable) that is to be predicted from a given set of predictor variables (independent variables). Using these variables, we generate a function that maps inputs to the desired outputs. The training process continues until the model reaches the desired level of accuracy on the training data. Examples of supervised learning include regression, decision trees, random forests, the k-nearest-neighbors algorithm, logistic regression, and so on.
2. Unsupervised learning
How it works: in these algorithms there is no target or outcome variable to predict or estimate. They are used to cluster a population into different groups, which is widely applied for segmenting customers into groups for specific interventions. Examples of unsupervised learning include association rule learning (the Apriori algorithm) and the k-means algorithm.
3. Reinforcement learning
How it works: these algorithms train a machine to make decisions. The machine is placed in an environment where it trains itself continually through repeated trial and error, learning from past experience and trying to capture the best possible knowledge in order to make accurate business decisions. An example of reinforcement learning is the Markov decision process.
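To make the supervised case concrete, here is a minimal sketch (pure Python, with invented toy data) of learning from labeled input/output pairs and predicting a label for a new input, in the spirit of the k-nearest-neighbors algorithm mentioned above:

```python
# Toy supervised learning: 1-nearest-neighbor on labeled (feature, label) pairs.
# All data below is invented for illustration.
training = [(1.0, "light"), (2.0, "light"), (8.0, "heavy"), (9.0, "heavy")]

def predict(x):
    # Return the label of the training example whose feature is closest to x.
    nearest = min(training, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.5))  # close to the "light" examples
print(predict(8.5))  # close to the "heavy" examples
```

The "training" step here is trivial (we just store the labeled examples), but the pattern is the same as in any supervised method: known predictors plus known outcomes yield a function that maps a new input to a predicted output.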
List of common machine learning algorithms
Here is a list of commonly used machine learning algorithms, which can be applied to almost any data problem:
Linear regression
Logistic regression
Decision tree
SVM
Naive Bayes
k-nearest neighbors (kNN)
k-means
Random forest
Dimensionality reduction algorithms
Gradient boosting and AdaBoost
1. Linear regression
Linear regression is used to estimate real values (house prices, number of calls, total sales, etc.) from continuous variables. We establish the relationship between the independent and dependent variables by fitting a best-fit straight line. This best-fit line is called the regression line and is represented by the linear equation Y = a*X + b.
The best way to understand linear regression is to relive a childhood experience. Suppose we ask a fifth-grade child to arrange the students in his class in increasing order of weight, without asking anyone their weight. What do you think the child will do? He or she would likely eyeball people's height and build and arrange them by combining these visible parameters. This is linear regression in real life: the child has figured out that height and build are related to weight by a relationship that looks like the equation above.
In this equation:
Y: dependent variable
a: slope
X: independent variable
b: intercept
The coefficients a and b are derived by the method of least squares, i.e., by minimizing the sum of the squared distances between the data points and the regression line.
Consider the following example: suppose the best-fitting line for some height and weight data is y = 0.2811x + 13.9. Given a person's height, we can then use this equation to estimate their weight.
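The least-squares coefficients mentioned above have a simple closed form, sketched below in pure Python. The height/weight numbers are invented for illustration; they are not the data behind y = 0.2811x + 13.9:

```python
# Least-squares fit of y = a*x + b from the closed-form formulas.
# The data points below are invented for illustration.
xs = [150.0, 160.0, 170.0, 180.0, 190.0]  # e.g., heights in cm
ys = [55.0, 58.0, 61.5, 64.0, 67.5]       # e.g., weights in kg

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# a = covariance(x, y) / variance(x);  b = mean_y - a * mean_x
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

print(a, b)  # approximately 0.31 and 8.5 for this toy data
```

Libraries such as scikit-learn do this (and its multi-variable generalization) for you, as in the code below.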
The two main types of linear regression are simple linear regression and multiple linear regression. Simple linear regression is characterized by a single independent variable, while multiple linear regression, as its name suggests, has multiple independent variables. When looking for the best-fit line, you can also fit a polynomial or curvilinear regression; these are known as polynomial or curvilinear regression.
Python code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric and numpy arrays
x_train = input_variables_values_training_datasets
y_train = target_variables_values_training_datasets
x_test = input_variables_values_test_datasets
#Create linear regression object
linear = linear_model.LinearRegression()
#Train the model using the training sets and check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
#Equation coefficient and intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
#Predict output
predicted = linear.predict(x_test)
R code
#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)
#Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict output
predicted <- predict(linear, x_test)
2. Logistic regression
Don't be misled by its name! Logistic regression is a classification algorithm, not a regression algorithm. It estimates discrete values (binary values such as 0/1, yes/no, true/false) from a given set of independent variables. In simple terms, it predicts the probability of an event by fitting the data to a logit function, which is why it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).
Let's understand this with a simple example. Suppose your friend gives you a puzzle to solve. There are only two possible outcomes: either you solve it or you don't. Now imagine being given a wide range of puzzles in an attempt to figure out which subjects you are good at. The outcome of this study would be something like this: if you are given a tenth-grade trigonometry problem, you are 70% likely to solve it, but if it is a fifth-grade history question, you have only a 30% chance of answering it correctly.
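Probabilities like these come out of the logistic (sigmoid) function, which squashes a real-valued linear score into the range (0, 1). Here is a minimal sketch; the coefficients and the "skill match" feature are hypothetical, hand-picked for illustration rather than fitted to any real quiz data:

```python
import math

def sigmoid(z):
    # Map any real-valued score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical model: score = w * skill_match + b, where skill_match in [0, 1]
# measures how well the puzzle topic matches what you are good at.
w, b = 4.0, -1.5  # hand-picked, not fitted coefficients

p_trigonometry = sigmoid(w * 0.60 + b)  # a topic you are fairly good at
p_history = sigmoid(w * 0.15 + b)       # a topic you are weak in

print(round(p_trigonometry, 2), round(p_history, 2))  # approximately 0.71 and 0.29
```

Training a logistic regression model amounts to choosing w and b so that these predicted probabilities match the observed outcomes as closely as possible.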