“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm-and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized-so much so that he has been the go-between in connecting the algorithm with a lawyer.

Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet.

“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”-to sound like a person-says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human-just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.

“First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress-and in neuroscience in particular-is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “There is a lively debate about how to define consciousness,” Iannetti continues. “What do we mean by ‘sentient’? The ability to register information from the external world through sensory mechanisms, or the ability to have subjective experiences, or the ability to be aware of being conscious, to be an individual different from the rest?”

For some, it is being aware of having subjective experiences, what is called metacognition (Iannetti prefers the Latin term metacognitione), or thinking about thinking. The awareness of being conscious can disappear-for example, in people with dementia or in dreams-but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA-that is, the ability to become aware of its own existence (which is consciousness defined in the ‘high sense,’ or metacognitione)-there is no ‘metric’ to say that an AI system has this property.”

“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally even in humans.” To estimate the state of consciousness in people, “we have only neurophysiological measures-for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer the state of consciousness based on outside measurements.

About a decade ago engineers at Boston Dynamics began posting videos online of the first incredible tests of their robots. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ great ability to remain balanced. Many people were upset by this and called for a stop to it (and parody videos flourished). That emotional response fits in with the many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them.