How would we know whether an artificial intelligence is conscious?


One famous question posed by the Australian philosopher David Chalmers is whether we can imagine "philosophical zombies": beings that behave just like you and me yet lack subjective experience. This problem has drawn many scholars, myself included, to the study of consciousness. The reason is that if such a zombie (or a sophisticated but unfeeling robot) could exist, then a physical mechanism alone, a brain or a brain-like device, cannot explain conscious experience.
Instead, we would have to posit some additional mental property to explain what conscious feeling is. Explaining how such mental properties arise is the so-called "hard problem of consciousness."

However, I see a small problem with Chalmers' philosophical zombies. A philosophical zombie should be able to raise questions about the properties of experience. But it is worth pondering how a person or machine that lacks experience could reflect on experience it has never had. In an episode of the podcast "Making Sense" (formerly "Waking Up"), Chalmers discussed the issue with neuroscientist and writer Sam Harris. "I think it's not particularly difficult to imagine at least one system that can do this," Chalmers said. "I mean, I am talking to you now, and you are making a lot of comments about consciousness that seem to strongly indicate that you are conscious. But I can at least entertain the idea that you are not conscious, that you are actually a zombie producing all this noise without any consciousness behind it."

This is not a strictly academic issue. If Google's DeepMind were developing an AI and it began asking why red feels like red and not like something else, there would be only a few possible explanations. Maybe it heard the question from someone else. That is possible: an AI might simply read a paper about consciousness and learn to ask questions about consciousness. It might also have been programmed to ask the question, like a character in a video game. Or the question might have emerged from random noise. Clearly, raising questions about consciousness proves nothing by itself. But if an AI zombie never heard such questions from another source and was not producing enough random output, would it conceive of the problem on its own? To me, the answer is clearly no. If I am right, then when an AI spontaneously raises questions about subjective experience, we should seriously consider that it may be conscious. And because we do not know whether it is ethical to unplug an AI without first determining whether it is conscious, we had better start listening for such questions now.

Our conscious experience consists of qualia: the subjective aspects of sensation, such as the redness of red or the sweetness of sweetness. The qualia that make up experience are irreducible and cannot be mapped onto anything else. If I were born blind, no one could convey to me the color shared by blood and roses, no matter how clear the description. The same would hold even if I were one of the blind people who develop "blindsight": the ability, despite blindness, to avoid obstacles and accurately guess where objects appear on a computer display.

Blindsight seems to show that some behaviors can be entirely mechanized, that is, produced without any subjective awareness, which echoes Chalmers' notion of the philosophical zombie. The brain of a blindsighted person appears to exploit pre-conscious areas of the visual system, producing visual behavior without visual experience. This often happens after a stroke or other damage to the visual cortex, the part of the cerebral cortex that processes visual information. Because the eyes are still healthy, they may feed information hidden from consciousness to specific brain regions such as the superior colliculus.

For the same reason, there are also a few cases of "deaf hearing." A report published in the journal Philosophical Psychology in 2017 details the case of a male patient, known as LS, who could distinguish different sounds by their content despite being born deaf. For people like LS, this discrimination occurs in silence. But if such a person asked the kind of question a hearing person would ask, say, "Doesn't that sound seem strange?", we would have good reason to doubt whether he is really deaf (though we could not be completely sure, since the question might be a prank). Similarly, if an AI spontaneously began asking the questions that only a conscious being would ask, we would reasonably develop the same suspicion: has subjective experience come online?

In the 21st century, we urgently need a Turing test for consciousness. AI is learning to drive vehicles, diagnose lung cancer, and write its own computer programs. Intelligent conversation may arrive within a decade or two, and a future super-intelligent AI will not live in a vacuum: it will have access to the internet and will read what Chalmers and other philosophers have written about qualia and consciousness. But if technology companies beta test an AI on a local intranet, isolated from such information, they can conduct Turing-test-style interviews to detect whether questions about qualia are meaningful to it.

What questions should we ask a potential silicon-based mind? For questions such as "What if my red is your blue?" or "Is there a color greener than green?", the answers an AI gives should tell us a great deal about its mental experience (or lack thereof). An AI with visual experience might entertain these possibilities, perhaps answering, "Yes, and I sometimes wonder whether there is a color that mixes the heat of red with the coolness of blue." An AI lacking visual qualia might instead answer, "That is impossible. Red, green, and blue each exist as different wavelengths." And even if the AI tries to improvise or deceive us, an answer like "Interesting, and what if my red is your hamburger?" would show that it has missed the point.

Of course, an artificial consciousness might also have qualia very different from our own. In that case, questions about specific qualia (such as color perception) might fail to connect with the AI. But questions about the more abstract nature of qualia themselves might still screen out philosophical zombies. For this reason, the best question is probably the "hard problem of consciousness" itself: Why does consciousness exist at all? Why do you have subjective experience as you process information from the world around you? If the AI finds the question meaningful, we have likely found artificial consciousness. But if the AI clearly does not understand concepts such as "consciousness" and "qualia," then evidence of an inner mental life is absent.

Building a consciousness "detector" is no small feat. Beyond such Turing tests, future researchers may apply today's abstract theories of consciousness and try to infer the presence of consciousness from a computer's wiring diagram. One such theory considers the amount of information integrated by the brain or another system, and it has already been used to infer whether brain-damaged patients are conscious, and even whether schools of fish are. In fact, the need to detect consciousness in brain-damaged patients had pushed this research past scientific taboos long before there was significant funding for detecting artificial consciousness.
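To make the "integrated information" intuition concrete, here is a minimal toy sketch, with everything about it (the two-node network, its update rule) invented for illustration. It computes the total correlation of a tiny network's state distribution, which is only a crude stand-in for integrated information, not the actual phi measure of any published theory: it quantifies how much the joint state of the system carries structure beyond what its parts carry separately.

```python
from collections import Counter
from math import log2

def entropy(counter):
    """Shannon entropy (bits) of an empirical distribution."""
    n = sum(counter.values())
    return -sum(c / n * log2(c / n) for c in counter.values())

def step(state):
    """Hypothetical two-node network: node A becomes XOR of both nodes, node B copies A."""
    a, b = state
    return (a ^ b, a)

# Sample the network's trajectory (it settles into a 3-state cycle).
states, s = [], (1, 0)
for _ in range(999):
    states.append(s)
    s = step(s)

# Total correlation: sum of marginal entropies minus joint entropy.
# Positive values mean the whole is more than the sum of its parts.
tc = (entropy(Counter(a for a, _ in states))
      + entropy(Counter(b for _, b in states))
      - entropy(Counter(states)))
print(round(tc, 3))  # prints 0.252
```

For two independent nodes the total correlation would be zero; the coupling in `step` is what makes it positive. Real integrated-information measures are far more demanding to compute, which is one reason inferring consciousness from a wiring diagram remains an open research problem.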

The lab where I work, led by Martin Monti at the University of California, Los Angeles, is committed to improving the lives of brain-damaged patients by developing better means of inferring consciousness from electrophysiological or metabolic brain activity. An ethical tragedy occurs when we unplug the life support of patients who are conscious but unresponsive; the same tragedy would occur if we unplugged the power of an artificial consciousness. And just as my work at the UCLA lab links theories of consciousness to the clinical behavior of brain-damaged patients, future researchers must link theories of artificial consciousness to an AI's performance in some kind of Turing test. By then, when the textbooks have been set aside, we will still need to ponder the question that zombies cannot ask.

