A Doll Called Alicia
'A Doll Called Alicia' is the 2018 case study for ITGS Paper 3, which is for HL students only. The exam paper is 1 hour 15 minutes long and contains four questions based on the pre-seen case study (30 marks). The case study booklet focuses on a large software company called MAGS, which is looking to expand into the AI market. This could cover a wide range of areas, but the booklet narrows it down to the interactive toy market. The case study delves into several of the ITGS HL topics, including machine learning and neural networks. It applies to May and November 2018 only; the case study for November 2017 was Wearable Technology.
The following key terms are highlighted as being relevant for 'A Doll Called Alicia'. Notes on each term are a collation of research from various sources:
ASSISTIVE TECHNOLOGY
Assistive Technology refers to any product, device, or equipment, whether acquired commercially, modified or customized, that is used to maintain, increase, or improve the functional capabilities of individuals.
ACCOUNTABILITY
Accountability refers to the individual or object that is responsible for a certain action.
AUTONOMY
The capacity of a system to make a decision about its actions without the involvement of another system or operator.
CONSUMER-GRADE ARTIFICIAL INTELLIGENCE
Consumer-grade artificial intelligence includes technologies such as Siri, Alexa, Cortana and Google Assistant.
DATA PRIVACY AND PROTECTION PRINCIPLES
Privacy is the ability of individuals and groups to determine for themselves when, how and to what extent information about themselves is shared with others.
DIGITAL PROFILE
A means by which we can analyse and piece together a person's interaction with a digital data network and produce the outputs of profiling: trend analysis, market statistics and behavioral analysis.
EMOTIONAL ARTIFICIAL INTELLIGENCE
Part of deep learning, this field analyses social and emotional intelligence as elements of human intelligence that are complementary to the intelligence assessed by the Turing Test. Sensitive Artificial Listeners provide a hands-on example of technology with some emotional and social skills.
HUMAN–COMPUTER INTERACTION (HCI)
The study of how people interact with computers and to what extent computers are or are not developed for successful interaction with human beings.
NATURAL LANGUAGE PROCESSING
The ability of a computer program to understand human speech as it is spoken. The most notable examples in society today are technologies such as Siri, Alexa, Cortana and Google Assistant.
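One very small step in a natural language processing pipeline is intent recognition: mapping an utterance to the action the user wants. Real assistants such as Siri or Alexa use statistical models far beyond this; the keyword rules, intent names and sample phrases below are purely illustrative, a minimal sketch of the idea for a toy like Alicia.

```python
# Hypothetical intent keywords -- invented for illustration only.
INTENT_KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "play_game": {"play", "game"},
    "tell_story": {"story", "tell"},
}

def detect_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    # Crude tokenisation: lowercase, strip basic punctuation, split on spaces.
    cleaned = utterance.lower().replace("?", "").replace("!", "")
    tokens = set(cleaned.split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if tokens & keywords:  # any keyword present?
            return intent
    return "unknown"

print(detect_intent("Hello Alicia!"))        # greeting
print(detect_intent("let's play a game"))    # play_game
```

A production system would replace the keyword lookup with a trained classifier, but the input/output contract (text in, intent label out) stays the same.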
PATTERN RECOGNITION PROTOCOLS
The imposition of identity on input data, such as speech, images, or a stream of text, by the recognition and delineation of patterns it contains and their relationships.
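"Imposing identity on input data" can be made concrete with a nearest-neighbour classifier, one of the simplest pattern recognition techniques: an unlabelled sample is given the label of the most similar labelled example. The feature values and class names below are invented for illustration; real systems would extract features from audio or images first.

```python
import math

# Hypothetical labelled examples: (feature vector, class label).
training = [
    ((0.1, 0.2), "circle"),
    ((0.9, 0.8), "square"),
    ((0.2, 0.1), "circle"),
]

def classify(sample):
    """Label the sample with the class of its closest training example."""
    # math.dist computes Euclidean distance between two points.
    _, label = min(training, key=lambda t: math.dist(t[0], sample))
    return label

print(classify((0.15, 0.15)))  # circle
print(classify((0.85, 0.90)))  # square
```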
STANDARD DATA FORMATS
A simple file format that uses fixed-length fields, commonly used to transfer data between different programs. This becomes an issue for the case study on line 128 of the Technical Challenges: "Mark wants to ensure that the AI software is compatible with future developments and standards, and can also be easily adapted for use with other products from MAGS and other companies."
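Mark's compatibility concern is exactly what standard, well-documented formats address: data written by one program can be read by any independently developed program that understands the format. A minimal sketch using JSON (the field names and values are hypothetical, not from the case study):

```python
import json

# A hypothetical doll profile that another MAGS product might need to read.
profile = {"doll_id": "A-123", "language": "en", "favourite_game": "quiz"}

encoded = json.dumps(profile)   # plain text any JSON-aware program can parse
decoded = json.loads(encoded)   # round-trips back to the same structure

assert decoded == profile
print(encoded)
```

Because JSON is an openly specified standard rather than a proprietary format, future products, and even other companies' software, can consume the same data without access to the original code.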
TECHNOLOGICAL SINGULARITY
The hypothetical point at which computers develop their own intelligence, surpassing that of humans. The term was coined by Vernor Vinge, the science fiction author, in 1983.
UNCANNY VALLEY
Used in reference to the phenomenon whereby a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it.

The uncanny valley phenomenon was put forth in an article in "Energy" in 1970 by Japanese robotics expert Masahiro Mori. Before that, Ernst Jentsch wrote about "the uncanny" in a 1906 essay, and Sigmund Freud followed up 13 years later. Yet the idea is largely based on anecdotes, and researchers such as Karl MacDorman, associate professor of human-computer interaction at Indiana University, are working on experiments to home in on possible explanations. MacDorman briefly worked with Saygin in Japan.

In his view, the uncanny valley effect has to do with a mismatch in the features of a single animation or robot, with some parts appearing much more humanlike than others. For instance, when a very human-looking head is placed on an obviously mechanical body, that can be creepy. So can a human face with robotic eyes. "When there are elements that are both human or nonhuman, this mismatch can produce an eerie sensation in the brain," MacDorman said. "It's when different parts of the brain are coming to different conclusions at the same time."

Other factors may also play a part. The uncanny valley effect could stem from uncertainty about whether a robotic character is truly alive or dead, and may even play into our deep-seated fears of death. Alternatively, it may be a form of cognitive dissonance, which happens when a person's beliefs are not in line with their behaviors -- for instance, a smoker who berates other smokers.
From an evolutionary perspective, humans have developed an aversion to sickness, and a creepy-looking almost-human might tap into the internal system that warns us against sources of disease. Similarly, we evolved to choose mates who are healthy, and weird robots may set off the same warning bells that told our ancestors to stay away from unfit sexual partners.

MacDorman's current focus is on the uncanny valley with respect to empathy: that is, is the uncanny valley phenomenon related to a person's difficulty in identifying with particular computer-animated or robotic characters in films? Does it relate to the impression that these characters are somehow "soulless," and in what ways?

Saygin's ongoing studies make use of electroencephalography, or EEG, which measures electrical activity along the scalp. While fMRI tells where in the brain activity occurs, EEG is better for looking at when -- that is, at what point in viewing agents with different degrees of humanness people's brain patterns change. EEG is also much more portable and less expensive: rather than a big scanner, it involves a cap worn on a person's head. Researchers may be able to identify the EEG patterns associated with the uncanny valley effect and with people's comfort with various robotic forms. Eventually, this information could be used to help robot developers or animators who don't want their creations to scare people.
"Instead of asking somebody, 'Do you like this robot?' we could get that information a lot more directly, and faster perhaps, if we can develop these technologies," she said.
It is not necessary to investigate the technical aspects of artificial intelligence (AI) beyond the depth outlined in the Case Study.
Any individuals named in the case study are fictitious and any similarities with actual entities are purely coincidental.