Let robots understand the world better
In science fiction, parallel universes are populated by robots indistinguishable from human beings. These robots are usually smarter, faster, and stronger than we are, and they seem able to do almost anything: piloting interplanetary spacecraft, battling alien invasions, collecting rubbish, and cooking gourmet meals.
Reality, of course, is far from that fantasy. Outside industrial settings, robots have not come close to the level of the robots in The Jetsons. The robots the public actually encounters look more like oversized plastic toys: they are programmed in advance to perform a fixed series of tasks, but cannot interact meaningfully with their surroundings or with their creators.
In the words of PayPal co-founder and technology entrepreneur Peter Thiel, "we wanted cool robots, but we got Flippy, a burger-flipping robot, and a 140-character limit." Still, scientists are making progress toward giving robots a human-like ability to observe and respond to their surroundings.
This month the field took several new steps forward: at the annual Robotics: Science and Systems conference in Cambridge, Massachusetts, researchers presented work on, among other topics, how to make robots more conversational, how to help them understand ambiguous language, and how to help them navigate by observation in complex spaces.
Optimized vision
Ben Burchfiel, a graduate student at Duke University, and his advisor George Konidaris, an assistant professor of computer science at Brown University, have proposed a new algorithm that lets machines perceive the world more the way humans do.
In their paper, Burchfiel and Konidaris show how they taught a robot to recognize and handle three-dimensional objects even when the items are partially covered or placed in unusual positions, such as a teapot turned upside down.
The researchers trained the algorithm on 3D scans of roughly 4,000 common household items: beds, chairs, tables, toilets, and so on. They then tested its visual ability on bird's-eye views of about 900 new 3D objects. The algorithm identified them with about 75% accuracy, compared with roughly 50% for competing computer-vision techniques.
The researchers note that they are not the first to train machines to categorize 3D objects. What distinguishes their work is that they restrict the space in which the robot learns to classify them.
"Imagine the space of all possible objects," the researchers explained. "If I hand you a pile of mini Lego bricks and tell you that you can stick them together however you like, you can create an enormous number of different things." This near-infinite variety would eventually produce objects that neither humans nor machines could identify.
To get around this, the researchers had their algorithm find a much more limited space that still accommodates the objects it needs to identify. Working inside this finite space, mathematically a subspace, greatly simplifies the classification task, and this restriction is what sets their approach apart from earlier methods.
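The general idea of classifying within a learned subspace can be sketched with a toy example: flatten voxelized objects into vectors, learn a low-dimensional subspace with PCA (computed here via SVD), and label new objects by nearest neighbor inside that subspace. This is an illustrative stand-in, not the authors' actual algorithm; the data, dimensions, and labels below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for voxelized 3D scans: each object is a flattened 8x8x8 grid.
n_train, dim, k = 200, 8 * 8 * 8, 10
train = rng.random((n_train, dim))
labels = rng.integers(0, 4, size=n_train)  # four pretend categories

# Learn a k-dimensional subspace with PCA via SVD of the centered data.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:k]  # top-k principal directions

def project(x):
    """Drop an object into the learned subspace."""
    return (x - mean) @ basis.T

train_proj = project(train)

def classify(x):
    """Nearest-neighbor label in the subspace."""
    dists = np.linalg.norm(train_proj - project(x), axis=-1)
    return labels[np.argmin(dists)]

print(classify(train[0]))  # a training object maps back to its own label
```

Classifying in a 10-dimensional subspace rather than the full 512-dimensional voxel space is what "working in the finite space" buys: distances are cheaper to compute and the classifier ignores variation outside the directions that matter.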
Obey orders
Meanwhile, two undergraduates at Brown University have found a way to make robots better at understanding instructions, even when those instructions are given at different levels of abstraction.
The study, led by Dilip Arumugam and Siddharth Karamcheti, explores how to train robots to understand the nuances of natural language and to follow instructions correctly and efficiently.
"The problem is that commands can come at different levels of abstraction, which can leave a robot unable to plan its actions effectively, or unable to accomplish the task at all," Arumugam says.
In the project, volunteers gave movement instructions to a virtual robot in an online domain consisting of several rooms and a chair, telling the robot to move things from one place to another. Some instructions were general, such as "take the chair to the blue room," while others spelled out the route step by step.
The researchers then used this database of instructions to teach the system to understand the different phrasings. The machine learned not only to follow instructions but also to recognize their level of abstraction, and that recognition is the key to choosing the most appropriate way to carry out a task.
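One way to picture "recognizing the level of abstraction" is as a routing step before planning: the system first guesses whether a command is a high-level goal or a low-level step, then hands it to the matching planner. The sketch below uses a deliberately simplistic keyword heuristic in place of the learned language model described in the study; every name and cue word in it is hypothetical.

```python
# Hypothetical sketch: route an instruction to the right planner depending
# on its inferred level of abstraction. The real system learns this mapping
# from data; here a keyword heuristic stands in for the learned model.

LOW_LEVEL_CUES = {"step", "forward", "backward", "left", "right"}

def abstraction_level(instruction: str) -> str:
    words = set(instruction.lower().split())
    return "low" if words & LOW_LEVEL_CUES else "high"

def plan(instruction: str) -> str:
    if abstraction_level(instruction) == "high":
        # A high-level goal triggers task planning (decompose into subgoals).
        return f"task-planner: decompose '{instruction}' into subgoals"
    # A low-level step maps almost directly onto a motor primitive.
    return f"motion-planner: execute '{instruction}' directly"

print(plan("take the chair to the blue room"))
print(plan("move one step forward"))
```

The payoff of routing is speed: a step-by-step command skips the expensive task-decomposition stage entirely, which is consistent with the timing gap the researchers report when the robot does or does not recognize a command's specificity.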
The study eventually moved from the virtual environment into the real world, using a Roomba-like robot that responded to 90% of instructions within a second. When it failed to recognize how specific a task was, planning and completing it took 20 seconds or more.
One application the paper mentions for this kind of machine learning is robot workers in warehouses, but many other fields could benefit from more versatile machines that switch seamlessly between specific operations and general tasks. "Other areas that might benefit from such systems include autonomous vehicles, assistive robotics, and medical robots," the researchers told SingularityHub by email.
There's more to look forward to
These findings bring us closer to the ideal of humanoid robots that can see, hear, and act like humans. But truly humanlike robots are still a long way off.