With regard to artificial intelligence, there are a few principles we must understand
Every technological advance has brought with it fear and uncertainty.
One. Restrain the unfounded fear
We have seen this pattern repeat itself. Since the Industrial Revolution, people have worried about the impact of new technologies on their lives and work. Today, every AI breakthrough triggers fresh fear. Although artificial intelligence has made great progress in recent years, it is still at an early stage, and that brings a certain level of uncertainty. The uncertainty is aggravated whenever problems surface or expectations outrun reality, leading to misunderstanding and anxiety. Elon Musk, an outspoken critic of artificial intelligence, paints a picture of a coming AI catastrophe, even as he embeds powerful AI into Tesla's cars. All of this suggests that, at some level, we have fallen into a risky and unnecessary hype cycle.
We must rein in this unfounded fear. The reality is this: no credible evidence today supports these doomsday scenarios. They are compelling fiction. I enjoyed watching The Terminator as much as any other kid my age, but these entertaining scenarios do nothing to help us deal with the immediate threats AI actually poses.
The main problems we face are bias and diversity, which are far more immediate and human than the singularity or robot uprisings. They include the bias embedded in training data and the lack of diversity among the people and datasets behind these systems. When we train artificial intelligence on biased data, we may inadvertently instill our own prejudices into it. Left unchecked, those prejudices can make AI work for some people while others pay the price. Without greater diversity among its builders, a small group of people will exert outsized influence over decisions hidden behind AI systems. As AI is integrated into decisions with major consequences for individual lives, such as hiring, loan applications, criminal sentencing, and medical care, we will need to be vigilant that it does not absorb our worst tendencies.
Two. There is no innocent data
When artificial intelligence touches our most fundamental human systems, we need to remember that it does not operate in a vacuum. AI relies on massive amounts of data, and powerful algorithms can dissect that data and draw illuminating insights from it. But AI depends on its training data. If the data is biased, racist or sexist for example, the results will reflect it. Whatever you train artificial intelligence on will be magnified, because the algorithm replicates its decisions countless times. It is disturbing to watch previously undiscovered prejudices in data gradually surface: the initial output of an AI system can reflect our most deep-rooted biases. Unlike a robot uprising, biased AI is not a hypothetical risk. A biased beauty-contest AI picked light-skinned contestants over dark-skinned ones. A biased Google algorithm classified Black people as gorillas.
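The amplification described above can be sketched in a few lines of plain Python with hypothetical data: a naive "model" that simply learns the majority outcome per group turns a 75/25 skew in its training data into a 100/0 rule at prediction time.

```python
from collections import defaultdict

# Hypothetical, deliberately skewed training data: (group, hired?)
training = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Train": count outcomes per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, rejected]
for group, hired in training:
    counts[group][0 if hired else 1] += 1

def predict(group):
    # Majority vote per group: the skew becomes an absolute rule.
    hired, rejected = counts[group]
    return hired > rejected

# Group A was hired 3 times out of 4; group B only 1 out of 4.
# The model now accepts every A and rejects every B, without exception.
print(predict("A"), predict("B"))  # True False
```

This is a toy, not any real hiring system, but the dynamic is the same one the article describes: the algorithm replays the pattern in its data countless times, with none of the exceptions a human might make.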
In one study, an AI resume screener favored candidates with European-American names over those with African-American names. In another, a biased AI linked men's names with career-oriented, mathematical, and scientific vocabulary, and women's names with artistic concepts. Just as our own clicks trap us inside our Facebook filter bubble, biased data produces AI that spreads human prejudice.

We cannot escape this obligation by deferring to artificial intelligence. The more we build these systems into our decision-making processes, the more work we must do to ensure we can trust them in the field. The first step in dealing with biased data is greater transparency in how data is collected. Where does it come from? How was it gathered? Who collected it? We also need to address bias in the models themselves. Once the inner workings of our models are clearer, we will be able to discover biases in the data that we could not see before.
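The name-association study mentioned above measured how close word vectors for names sit to "career" words versus "art" words. A toy sketch of that measurement, using made-up 2-D vectors (real studies use pretrained embeddings with hundreds of dimensions):

```python
import math

# Entirely made-up 2-D "embeddings" for illustration only.
vectors = {
    "john":   (0.9, 0.1),
    "amy":    (0.2, 0.8),
    "career": (1.0, 0.0),
    "art":    (0.0, 1.0),
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(name):
    # Positive: the name sits closer to "career"; negative: closer to "art".
    v = vectors[name]
    return cosine(v, vectors["career"]) - cosine(v, vectors["art"])

print(association("john") > 0)  # True: leans toward "career"
print(association("amy") > 0)   # False: leans toward "art"
```

Nothing in the arithmetic is prejudiced; the skew lives entirely in the vectors, which in real systems are learned from human-written text. That is what "no innocent data" means in practice.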
Although human bias created these challenges, human insight can address them. Algorithms have made real progress in filtering fake news and detecting discrimination, but human oversight will remain a necessary condition for building fairer AI systems. In the ongoing discussion of how AI will change work, it is easy to imagine a new role: the AI monitor. We will still need humans to check the inputs and outputs of artificial intelligence.
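One common shape for the "AI monitor" role described above is a human-in-the-loop gate: predictions below a confidence cutoff are queued for manual review instead of being acted on automatically. A minimal sketch, with a hypothetical threshold and decision labels:

```python
# Hypothetical confidence cutoff; in practice this is tuned per application.
REVIEW_THRESHOLD = 0.85

def route(prediction, confidence):
    """Apply high-confidence predictions; escalate the rest to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

decisions = [route("approve", 0.97), route("reject", 0.62)]
print(decisions)
# The 0.62-confidence rejection is escalated rather than applied blindly.
```

The design choice is that the system fails toward human judgment: uncertainty costs reviewer time rather than silently producing a consequential decision.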
Three. Fairer artificial intelligence
This brings us to a second, related problem in building fairer artificial intelligence: we need more diversity in the communities of researchers and developers who build these systems.
Several studies have warned the industry about a serious imbalance. According to Code.org, Black, Latino, American Indian, and Native Pacific Islander students are significantly underrepresented, accounting for only 17% of computer science majors. Underrepresentation often