Future artificial intelligence is likely to provoke strong public opposition
Researchers say social, ethical and political concerns are intensifying and that artificial intelligence needs far more oversight. Experts warn that the tech industry's emerging field of artificial intelligence (AI) could provoke strong public opposition, because it is increasingly concentrated in the hands of private companies, threatens people's jobs, and operates without effective supervision or regulatory control.
In a new series of reports on artificial intelligence, industry experts stressed the technology's great potential: it has accelerated the pace of scientific and medical research, can make cities run more smoothly, and improves the efficiency of businesses.
But despite the promise of an AI revolution, there are growing social, ethical and political concerns about the technology developing without adequate oversight from regulators, legislators and governments. Researchers told the Guardian that:
- A strong public backlash could reduce the benefits of artificial intelligence.
- A brain drain to private companies is damaging universities.
- Expertise and wealth are becoming concentrated in a handful of firms.
- The field suffers from a huge diversity problem.
In October, Dame Wendy Hall, a professor of computer science at the University of Southampton, co-chaired an independent review of the UK's artificial intelligence industry. The review found that AI could add £630 billion to the economy by 2035, but she says that in return the technology must benefit society.
"Artificial intelligence will affect every part of our infrastructure," she said. "We have to make sure it works for us, and we have to think through everything. When machines can learn and do things for themselves, what risks does that pose for human society? The countries that grasp these issues will be the winners of the next industrial revolution."
Today, responsibility for developing safe and ethical artificial intelligence rests almost entirely with the companies that build it. There are no testing standards, and no body is empowered to monitor or investigate the bad decisions or accidents that AI systems cause.
Kate Crawford, co-director of the AI Now Institute at New York University, said: "We need strong independent institutions, with dedicated experts and knowledgeable researchers, who can act as watchdogs and hold companies to high standards. These systems are becoming the new infrastructure, and it matters that they are both safe and fair."
Much of modern artificial intelligence makes decisions only after training on massive data sets. But if the data itself contains biases, the AI will inherit and repeat them.
Earlier this year, an AI used to parse language was found to exhibit gender and racial biases. Another, an image-recognition system, classified chefs as women even in photos of balding men. Others, including tools used in policing and prisoner risk assessment, have been shown to discriminate against black people.
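The bias-inheritance mechanism described above can be shown with a minimal, self-contained sketch. The toy (occupation, pronoun) pairs below are entirely invented for illustration, and the "model" is just a frequency count, but it demonstrates how a skew in the training data becomes the system's confident prediction:

```python
from collections import Counter

# Toy "training set" of (occupation, pronoun) pairs with a deliberate,
# invented skew. Illustration only, not real data.
data = ([("chef", "she")] * 8 + [("chef", "he")] * 2
        + [("engineer", "he")] * 9 + [("engineer", "she")] * 1)

def predict_pronoun(occupation, examples):
    """A 'model' that simply returns the pronoun most often paired with
    the given occupation in training: the simplest possible learner."""
    counts = Counter(p for occ, p in examples if occ == occupation)
    return counts.most_common(1)[0][0]

# The skew in the data becomes the model's confident prediction:
print(predict_pronoun("chef", data))      # "she"
print(predict_pronoun("engineer", data))  # "he"
```

A real system is vastly more complex, but the failure mode is the same: the learner has no notion of which regularities in its training data are desirable and which are historical bias, so it reproduces both.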
The industry also has a serious diversity problem, which is part of why AI discriminates against women and ethnic minorities. At Google and Facebook, four in five technical staff are men. White men dominate the field of artificial intelligence, and it has produced health apps designed only around male bodies, photo services that labelled black people as gorillas, and speech-recognition systems that could not recognise women's voices. "Software should be designed by a diverse workforce, not your average white male, because all of us, men and women of every ethnic background, will be its users," said Holzer.
Artificial intelligence that is poorly tested or misused is equally worrying. Last year an American driver died when the Autopilot system in his Tesla Model S failed to see a truck crossing the freeway. The US National Transportation Safety Board investigated the fatal crash and criticised Tesla for releasing an Autopilot system that lacked adequate safeguards. Elon Musk, the company's CEO, is nonetheless one of the most vocal advocates of AI safety and regulation.
But there are growing concerns about the use of AI systems to manipulate the public, after serious problems with social media in the run-up to the UK referendum and the 2016 US election. Toby Walsh, a professor of artificial intelligence at the University of New South Wales, said: "A technological arms race is under way to see who can influence voters." He recently wrote a book on artificial intelligence called Android Dreams.
"We have already worked out rules that limit how much money can be spent to influence how people vote. I think we are going to need new limits on how much technology can be used to influence people."
Leading AI researchers appeared before the House of Lords select committee on artificial intelligence.