With the development of artificial intelligence, should AI be given more rights?
Today, you probably don't mistake Apple's virtual assistant Siri, Amazon's Alexa, or Microsoft's Cortana for people. These applications imitate human assistants, but they are obviously not human. Yet as software grows ever more complex, we may one day find it hard to insist that nobody is home.
Sophia
But rights may not be reserved for humans alone. Kristin Andrews, a philosopher at York University in Toronto, Canada, puts it this way: "If you have a computer or a robot that can act autonomously the way humans do, or that has self-knowledge, I don't think we can say it isn't a person."
This raises a series of difficult questions. How should we regard a robot with a certain level of understanding? If we become convinced that an artificial intelligence can suffer emotionally, or feel genuine pain, how should we treat it? Would switching it off then amount to murder?
Robot vs ape
Before tackling that question, it helps to look at the rights some animals are being granted today. Animal-rights advocates are pushing to reassess the legal status of certain animals, especially the great apes. Groups such as the Nonhuman Rights Project, a non-profit based in Coral Springs, Florida, argue that chimpanzees, gorillas, and orangutans should be regarded as autonomous persons rather than merely as the property of zoos.
Stephen Wise, who heads the organization's legal team, says the same logic applies to any entity with the capacity for autonomy, whatever form it takes. If one day we build robots with genuine perceptive abilities, he argues, we will owe them the same moral and legal obligations we are now striving to win for non-human animals.
Of course, deciding which machines deserve our moral consideration is very difficult. Because we readily project human thoughts and feelings onto inanimate objects, we can end up extending sympathy to entities that have no thoughts or feelings at all.
Consider Spot, a dog-like robot developed by Boston Dynamics. Earlier this year, the Waltham, Massachusetts-based company released a video showing staff members kicking the four-legged machine. The point was to demonstrate Spot's excellent balance, but some viewers felt its reaction was uncomfortably animal-like. PETA issued a statement calling the treatment of Spot "inappropriate".
Kate Darling, a researcher at the MIT Media Lab in Cambridge, Massachusetts, found something similar when she studied how people interact with Pleo, a toy dinosaur robot. Pleo does not look lifelike; it is obviously just a toy. But its programmed behaviors and vocalizations suggest not only intelligence but also the capacity to experience pain. If you hold Pleo upside down, for example, it whimpers and begs you to stop.
To probe how far our sympathy extends to simple robots, Darling encouraged participants at a recent workshop to play with Pleo, then asked them to destroy it. Almost everyone refused. "Even though we know perfectly well, at a cognitive and rational level, that they are not real, people treat robots as if they were living things," Darling said.
Although neither Pleo nor Spot can feel pain, Darling thinks it matters how we come to regard such entities. "If violence toward them makes us uneasy, or feels somehow wrong, perhaps that is our compassion speaking, and we are reluctant to admit it because it might change how we look at other creatures," she said. This is a central question posed by the TV series Westworld, in which theme-park guests are encouraged to mistreat hyper-realistic androids.
Dialogue with robots
As far as we know, neither Pleo nor any robot that exists today is sentient. But if machines ever do come to merit the consideration discussed above, how should we treat them? And first of all, how would we even know that a machine can think?
Half a century ago, the computer-science pioneer Alan Turing considered this problem. In Turing's view, we can never be certain that a machine has feelings and experiences like our own, so the best we can do is to see whether it can carry on a conversation the way a human does.
Given the complexity of human dialogue, building a robot that can sustain a long, natural spoken conversation is a daunting task. But if we could create such a machine,