Artificial intelligence will change the world
Just as electricity transformed industry in the last century, artificial intelligence will dramatically change society over the next hundred years. AI is being integrated into home robots, self-driving taxis, and mental-health chatbots. One start-up is using AI to build robots that come closer to human-level intelligence. AI has already entered daily life through digital assistants such as Siri and Alexa, letting consumers shop and search online more accurately and efficiently, along with other tasks people now take for granted.
"AI is like a new kind of electricity," said Dr. Andrew Ng, co-founder of Coursera and a professor at Stanford University, in a keynote at an AI frontiers conference held in Silicon Valley last week. "About 100 years ago, electricity changed every major industry. AI has reached that same point and has the ability to transform every mainstream industry in the next few years." Ng said that although people think of AI as a fairly new technology, it has actually existed for decades; it is taking off now thanks to the growth of data and computing power.
Ng said that most of the value AI creates today comes from supervised learning, which has advanced in two big waves. The first wave was deep learning applied to simple outputs, such as predicting whether a consumer will click on an online ad given information about that user. The second wave arrived when the output was no longer a single number or label but a complex structure, such as a transcribed sentence in speech recognition, a translation into another language, or audio. For example, in a self-driving car, an input image yields the positions of the other vehicles on the road.
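The first wave described above, predicting a click from user features, is classically modeled with logistic regression. A minimal sketch under that assumption (the features and data here are made up for illustration; production systems use far richer models):

```python
import math

def train_click_model(rows, labels, lr=0.5, epochs=200):
    """Fit a tiny logistic-regression click predictor by gradient descent.

    rows   : list of feature vectors (e.g. a normalized past click-through rate)
    labels : 1 if the user clicked the ad, else 0
    """
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted click probability
            g = p - y                       # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_click(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: users with a high past click-through rate tend to click again.
X = [[0.9], [0.8], [0.7], [0.2], [0.1], [0.0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_click_model(X, y)
```

The output is a probability, which an ad system can threshold or rank on.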
Xuedong Huang, Microsoft's chief speech scientist, said that deep learning, in which computers learn functions from data sets rather than executing explicitly programmed instructions, has been key to reaching speech recognition on par with humans. In 2016, Huang led a Microsoft team to a historic result when their system recorded a 5.9% word error rate, matching human transcribers. "Thanks to deep learning, we were able to reach human parity after 20 years," Huang said at the conference. Since then, the team has reduced the error rate to 5.1%.
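The 5.9% figure above is a word error rate (WER): the word-level edit distance between the system's transcript and a human reference, divided by the reference length. A small sketch of that metric:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

Dropping one word from a six-word reference, for instance, yields a WER of 1/6, about 16.7%; Microsoft's 5.9% means roughly one word error per 17 reference words.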
The rise of digital assistants
Beginning around 2010, the quality of speech recognition started to improve, eventually giving rise to Siri and Alexa. "Now you almost take it for granted," Ng said. Beyond that, voice is expected to replace touch input, according to Ruhi Sarikaya, a director of Amazon Alexa. The key to improving accuracy is understanding context. For example, if a person asks Alexa what to do for dinner, the assistant must evaluate the intent: does the user want Alexa to book a restaurant, order delivery, or find a recipe? If the user asks Alexa to find "Hunger Games," do they want to listen to the music, watch the video, or hear the audiobook?
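The intent disambiguation described above can be caricatured as a classifier over candidate intents. A toy keyword heuristic stands in here for the trained models and user context a real assistant uses; the intents and cue words are purely illustrative:

```python
# Toy intent resolver: maps an utterance to one of several candidate
# intents by keyword cues. Real assistants use trained models plus user
# context and history; this is only an illustration of the task.
INTENT_CUES = {
    "play_music":  ["soundtrack", "song", "listen to music"],
    "play_video":  ["watch", "movie", "video"],
    "audiobook":   ["audiobook", "audio book"],
    "find_recipe": ["recipe", "cook"],
    "order_food":  ["order", "delivery"],
    "book_table":  ["reservation", "table", "restaurant"],
}

def resolve_intent(utterance):
    text = utterance.lower()
    for intent, cues in INTENT_CUES.items():
        if any(cue in text for cue in cues):
            return intent
    return "unknown"
```

The hard cases are exactly the ones the article names: "find Hunger Games" matches no cue at all, which is why context about the user is needed to break the tie.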
Dilek Hakkani-Tür, a research scientist at Google, said the next step for digital assistants is the more advanced task of understanding meaning beyond the words themselves. For example, if a user says "later today," it might mean between 7 p.m. and 9 p.m. for one person and 3 p.m. to 5 p.m. for another. Hakkani-Tür said the next stage requires more complex, more natural interactive dialogue and tasks that cross domain boundaries. In addition, digital assistants should be able to do more, such as reading and summarizing e-mail.
After speech recognition comes computer vision: the ability of computers to recognize images and classify them. With so many people uploading pictures and videos, adding metadata to all that content by hand becomes impractical, so a way to categorize it automatically is needed. Manohar Paluri, a visual recognition expert at Facebook AI Research, said Facebook has built a platform called Lumos that can understand and categorize videos at scale. Facebook uses Lumos to collect data, for example gathering fireworks images and videos. The platform can also use people's poses to classify videos, such as labeling a scene of people gathered around a sofa as "hanging out."
Rahul Sukthankar, who leads video understanding at Google, added that the key is determining the main semantic content of an uploaded video. To help computers correctly identify video content, Sukthankar's team mines YouTube for related content that AI can learn from, such as the frame rates typical of non-professional footage. Sukthankar added that an important direction for future research is using video itself to train computers: if a robot watches multi-angle videos of a person pouring cereal into a bowl, it should be able to learn the task by watching.
Alibaba uses AI to drive sales. For example, shoppers on its Taobao e-commerce site can upload a photo of something they want to buy, such as a fashionable handbag spotted on a stranger in the street, and the site returns the handbags that most closely match the photo. Alibaba also uses augmented reality (AR) and virtual reality (VR) to let people browse and shop in stores such as Costco. On its Youku video site, Alibaba is developing a way to insert virtual 3D objects into user-uploaded videos to increase revenue, as many video sites work to improve profitability.
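Photo-based product search of this kind is typically built on embedding similarity: a vision model maps each catalog image to a feature vector, and the query photo is matched to its nearest neighbors. A minimal sketch of the matching step, assuming the embeddings already exist (the vectors and product names below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def closest_products(query_vec, catalog, k=2):
    """Return the k catalog items whose embeddings best match the query."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical embeddings, as if produced by some vision model.
catalog = {
    "leather handbag": [0.9, 0.1, 0.0],
    "canvas tote":     [0.7, 0.3, 0.1],
    "running shoe":    [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of the uploaded street photo
```

At catalog scale, the brute-force sort is replaced by an approximate nearest-neighbor index, but the ranking principle is the same.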