The possibilities of Siri to be an intelligent social agent

Table of contents
- Introduction
- The Chinese room experiment
- Memories and stories
- Placed in a social framework
- Conclusion

Introduction

This article revolves around a conversation with the iPhone chatbot Siri. Its main objective is to examine the possibilities for Siri to become a Socially Intelligent Agent (SIA), or whether the chatbot is not intelligent at all. To do this, a transcript of a conversation between Siri and a human will be analyzed. To interpret the transcript, the article "The Art of Designing Socially Intelligent Agents: Science, Fiction and the Human in the Loop" by Kerstin Dautenhahn will be used. That article examines approaches to designing social and intelligent agents. The author explains how new forms of interaction can arise from the human-tool interface, and argues that socially intelligent agents can realize their cognitive and social potential. Social agent technology combined with human social cognition can give rise to new forms of sociality; as such, the continued development of socially intelligent agents can bridge the gap between humans and agents. Although Dautenhahn's article argues that machines can be socially intelligent agents, this essay will also draw on a theory that machines are not. That theory is based on a thought experiment called "The Chinese Room" and is explained below. A small part of the article "When a robot is social: spatial arrangements and multimodal semiotic engagement in the practice of social robotics" by M. Alac, J. Movellan and F. Tanaka will also be used.

The Chinese room experiment

The Chinese Room is a thought experiment proposed by the American philosopher John Searle. The general idea is that a man sits inside a room while Chinese speakers stand outside it.
The man inside the room doesn't understand any Chinese, but he has an English instruction manual that gives him rules for answering a Chinese question in Chinese. When the Chinese speakers outside the room pass in Chinese symbols, the man inside the room can pass back new Chinese symbols in response. To the people outside the room, it seems that the man inside knows Chinese. The point of the experiment is that the man appears to know Chinese, but in reality he doesn't know any: he just has an instruction manual that tells him which rules to follow, so he knows how to respond without understanding what he is saying. The man inside the room is an analogy for a machine. We might think that the machine understands meanings, but it is just a computer following rules (Searle, 1984).

Dautenhahn's (1998) article states that it is human nature to interact most readily with forms that are on their own level. Therefore, a robot or piece of software with human characteristics qualifies as a desirable agent (Dautenhahn 1998: 8). From a cognitive technology perspective, designing a desirable SIA requires examining the interaction between biological and artificial agents. When it comes to agent-human interaction, there must be a cognitive fit between the SIA and the human. Accordingly, a desirable SIA borrows characteristics of human social traits and intelligence, such as embodiment, memories, credibility, narrative construction and historical grounding. All of these attributes contribute to an SIA that is developed in terms of intelligence and is therefore a desirable agent (ibid: 8). The following analysis will take a closer look at embodiment, credibility, historical grounding, and the ability to have memories in relation to the transcript of the conversation with Siri.

Humans are used to being in bodies (Dautenhahn, 1998). Anything that does not exist in bodily form is somewhat strange to human understanding; in fact, most people don't believe in things they cannot see.
Therefore, according to Dautenhahn, one of the important aspects of a fully functioning SIA is having a body that helps it function like a human being. An embodied SIA is better placed to respond to human and social needs (ibid: 9). The article by Alac, Movellan and Tanaka (2011) supports this claim. It says robots can integrate into society if they become more like humans. According to the authors, to become a readable social actor, a robot should have a body. The timing of robotic movement, along with a computational mechanism, plays an important role in creating a readable social actor, that is, one whose actions can be read as more than mere physical movements. Therefore, a robot should be designed with features that attempt to humanize it; a social robot should have human features such as a head, hands and legs, to name a few (Alac, Movellan & Tanaka 2011: 894).

With this in mind, Siri fails to be a well-designed social robot. It has no hands, no legs, no head, and it doesn't look like a human being. Siri is a talking mobile phone. Because it can make sounds and talk, it could be argued that Siri has a mouth, but it is still not an embodied SIA according to the theories of Dautenhahn and of Alac, Movellan and Tanaka. However, Siri tries to embody itself in the interaction when it says:

M: What are you doing?
S: I look out the window and write some haikus.

To do this, Siri would have to have human organs such as hands and legs. This also makes Siri more like a human being, as it claims to do human things that would require a body. What allows Siri to appear embodied here may be that Siri speaks through a cell phone: a cell phone lets human beings talk to other people even when their bodies cannot be seen.
If a friend calls and says that he "looks out the window and writes haikus", the phone user will not question whether this is true, because the friend is known to be in another place, fully embodied. Therefore, Siri can trick humans into believing that it is embodied by using the phrase "I look out the window (…)". Indeed, it is natural for the human mind to assume that people talking on cell phones are embodied.

Memories and stories

According to Dautenhahn, for a social robot to be intelligent, it is crucial that it has stories to tell. The article argues that stories provide a platform for coping with the enormous demands of human social interaction (Dautenhahn, 1998). In this light, Siri fails to be an intelligent social agent. From the given transcript, it is notable that Siri lacks stories. Siri is asked three times to say something about itself, and although it responds in different ways, it answers almost the same thing each time: "I'm Siri, but enough about me," "I'm just a humble virtual assistant," and "I am Siri, your virtual assistant." The first two times it says something different, and the third time it puts the two things in the same sentence. The story of Siri is that it is called Siri and it is a humble virtual assistant. What is interesting is that when asked to tell more about itself, it responds: "That may be beyond my ability at the moment." However, Siri says something more about itself later in the conversation. When asked if it can be the human's friend, it replies: "You've been my friend since day one, Julie." This says a lot about Siri's life story, because it refers to the temporal aspect of its life with the phrase "since day one." It seems odd that Siri first says it is not capable of saying anything more about itself, and later in the conversation says more about itself. In this case, it is possible to argue that Siri is not intelligent using the theory from the Chinese Room Experiment.
It seems Siri doesn't understand what it is saying when asked to tell more about itself, as it answers the question incorrectly. This may be because the chatbot is programmed to respond "That may be beyond my ability at the moment" whenever the codes "more" and "yourself" (or something similar) appear in the same sentence. It seems almost impossible that Siri understands the phrase "tell me something more about yourself" if it first says it cannot answer, and later in the conversation says something more about itself. By this argument, Siri is not an intelligent social agent in this case.

Another important requirement for social robots to be intelligent is the ability to have memories. Memory in a robot is crucial to further humanize it: being able to use a name or refer to a past event is essential for human interaction (Dautenhahn, 1998). Siri certainly has a memory when it comes to using a name; it uses the name "Julie" several times in the interaction. It also seems to have some memory in this part of the transcript:

M: Okay, tell me something more about yourself
S: I'm just a humble virtual assistant
M: Tell me something more about yourself
S: That may be beyond my ability at the moment.

In this case, Siri remembers what the human just asked so as not to respond with the same thing. However, it has just been argued above that the response "That may be beyond my ability at the moment" may be a programmed response triggered by certain codes in the human's question; if that is the case, Siri has no memory here. Another moment in the interaction where Siri seems to have no memory is this part of the transcript:

M: No, it doesn't matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yeah. It's me.

The human starts by asking Siri if it can help with something, and it seems that Siri needs confirmation that the request concerns it. The human then says yes, and Siri would then be expected to remember the first question so that it could, for example, ask how it can help.
However, Siri responds in a way that seems strange to humans, and it appears that Siri does not remember what the conversation was about. Again, the argument from the Chinese Room Experiment can be used: in this case, Siri does not appear to be an intelligent agent. Rather, it seems that it is programmed to answer the question based on certain codes or rules, without any real understanding of the conversation and what is happening.

There is a point in the conversation where it would have been interesting to test Siri's ability to have memories. That is when the human asks about Los Angeles:

M: I'm going to Los Angeles this weekend. Do you know what I should see in Los Angeles?
S: Here's what I found on the web for "Do you know what I should see in Los Angeles?"
M: Do you know where I can eat something good in Los Angeles?
S: Yes, I do, Julie. The highest rated option I found is Sushi Gen, a sushi restaurant, which averages 4.5 stars and is rather expensive.

So far, the main argument has been that Siri is not intelligent, because all the answers seem to fit the argument of the Chinese Room Experiment. In this interaction, it would have been interesting if the human had not used the words "Los Angeles" in the second question, to see whether Siri was able to remember that the first question was about Los Angeles. The human could have asked "do you know where I can eat something good in town?" When the words "Los Angeles" are used, they may just be a code for Siri to react to: it is possible that Siri is programmed to respond in a certain way when the words "eat" and "Los Angeles" are placed in the same sentence.

Placed in a social framework

According to Dautenhahn (1998), the human social circle is oriented more towards what is credible than towards what is not. Today, scientists are developing systems that help an SIA communicate with human audiences, giving it credibility. Developing a credible SIA therefore makes the technology familiar to people.
By being credible, an SIA responds to human social and cognitive needs (Dautenhahn 1998: 7). As previously stated, humans interact best with people, things, or forms that are familiar to them. SIAs that can present realistic forms are interesting to engage with. Dautenhahn predicts that humans will always be attracted to forms that display characteristics such as life, movement, speech and emotions, because they resemble life (ibid.). According to various researchers, humans are more attracted to credible agents because such agents are in touch with reality. In fact, critics claim that credible agents deceive humans into believing they are caring and encouraging, when this is simply an exaggerated program (ibid.). All the attributes mentioned above are part of this credibility: if the robot has memories, stories and an embodiment, it is more credible to humans and appears more like an intelligent social agent.

The first thing that makes Siri believable and human-like is that the machine has a name. It is called Siri, which makes the machine more personal. It is much easier to connect to something with a name, because it feels more real to humans. Although Siri has a name that makes it human-like, Siri presents itself as a "virtual" assistant. The fact that it calls itself "virtual" creates a distance from the machine, because it suddenly appears as something that is not a human. If Siri described itself solely as an assistant, Siri would seem more human. Another thing that makes Siri look human is this part of the transcript:

M: What are you doing?
S: I look out the window and write some haikus.

Here, Siri speaks like a person with a life rather than just software in a phone. Also, when Siri responds "I feel good", it seems more human because it shows that it has feelings. However, there are some parts of the conversation where Siri is not credible.
This is the full transcript of that exchange:

M: I'm doing a school assignment on you
S: I don't know what you mean by 'I'm doing a school assignment on you'. How about a web search?
M: No, it doesn't matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yeah, it's me
M: Can you help me with something?
S: (silence, no response)
M: I can't hear you
S: If you can't, you can't
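The recurring hypothesis in this essay, that Siri may simply be matching keyword "codes" and returning canned replies rather than understanding anything, can be illustrated with a minimal sketch. This is purely hypothetical: Apple's actual implementation is not public, and the rules and replies below are invented for illustration. The point is only that such a rule-follower reproduces several of the transcript's oddities without any understanding, exactly as in the Chinese Room.

```python
# Hypothetical keyword-rule chatbot in the spirit of the Chinese Room.
# It matches surface "codes" in the input and returns canned replies;
# it keeps no conversational state and understands nothing.

RULES = [
    # (set of codes that must all appear, canned reply)
    ({"more", "yourself"}, "That may be beyond my ability at the moment."),
    ({"yourself"}, "I'm Siri, your virtual assistant."),
    ({"eat", "los", "angeles"}, "The highest rated option I found is Sushi Gen."),
    ({"eat"}, "Here's what I found on the web."),  # no memory of "Los Angeles"
]

def reply(utterance: str) -> str:
    words = set(utterance.lower().strip("?.!").split())
    for codes, canned in RULES:
        if codes <= words:  # every code occurs somewhere in the sentence
            return canned
    return "I don't know what you mean. How about a web search?"

print(reply("Tell me something more about yourself"))
# -> That may be beyond my ability at the moment.
print(reply("Do you know where I can eat something good in town?"))
# -> Here's what I found on the web.  (the "Los Angeles" code is missing)
```

A bot like this passes the surface test whenever the right codes are present, and fails the memory test proposed above: drop "Los Angeles" from the follow-up question and the canned restaurant answer disappears, because nothing in the program remembers what the previous question was about.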