This is a popular topic for science fiction movies: advanced computerised machines that resemble human beings in appearance and in mind. Examples include I, Robot, The Terminator films and, more recently, Ex Machina.
What is intelligence?
Intelligence is very difficult to define. Ask a sample of people and they might say 'the ability to respond to the environment' or 'the ability to learn new knowledge or skills', or they might mention using logic, using reasoning, drawing conclusions, learning from experience, or making evaluations and judgements.
There is no single answer to the question; intelligence is as hard to define as it is to create. Many developers of AI and robotics have chosen instead to develop machines that imitate the behaviour of humans. As Steven Pinker observed, it is very difficult for machines to do the easy things (e.g. pick up a pen or walk up some steps), and comparatively easy for them to do the hard things (e.g. complex mathematical functions).
In 1950, the famous mathematician and computer scientist Alan Turing devised an experiment to test the intelligence of machines, called the Turing Test. A human asks questions of two contestants (one of which is a computer) without knowing which is which. If the human is unable to determine which contestant is the computer, then Turing suggested that the machine could be considered 'intelligent'.
A contemporary form of the Turing Test can be seen when posting on forums or signing up for new accounts online. It's called the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) and it is designed to prevent spam bots from posting adverts and creating fake accounts. Google has also implemented 'reCAPTCHA', which involves clicking a checkbox or choosing the image squares that contain certain items to prove that you are not a bot.
Artificial Intelligence (AI) - systems that simulate intelligence through strict facts or rules. The focus of AI is to give the impression of human-like intelligence. In other words, AI is focused on the results, not the method. AI usually focuses on one particular area of knowledge such as playing a game rather than a general intelligence that can be adaptable for any situation.
Computational Intelligence (CI) - Computational intelligence focuses on creating systems that 'think' the way humans think. It looks at systems that can learn, develop and reach improved solutions based upon previous experiences or results. CI looks at the method by which the end results are achieved.
One useful thought experiment to consider is called 'The Chinese Room'. It examines the concepts of machine intelligence, knowledge and understanding. The experiment supposes that a room has a human inside who can only speak English, but in the room are books that explain how to respond in Chinese. People come to the door and post questions that are written in Chinese. The person inside searches the books for the correct response to the question in Chinese, then writes the answer on the paper and posts it back under the door. The people outside the room, having posted a question in Chinese and received an answer in Chinese, might assume they are having a conversation with someone who knows and understands Chinese. However, one could argue that the person (and therefore any machine that does the same) does not understand Chinese, as they don't know the meaning of the individual symbols being manipulated. Who knows Chinese here? The room? The person? The books containing the symbols?
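The Chinese Room can be sketched as pure symbol lookup. The sketch below (with an invented, illustrative 'rule book' of question-answer pairs) shows how a program can produce fluent-looking replies while manipulating symbols it does not understand:

```python
# The Chinese Room as a lookup table: input symbols are mapped to
# output symbols with no notion of their meaning. The rule-book
# entries here are invented placeholders for illustration.

rule_book = {
    "你好吗?": "我很好。",      # "How are you?" -> "I am fine."
    "你叫什么?": "我叫小明。",  # "What is your name?" -> "I am called Xiaoming."
}

def room(question: str) -> str:
    # Do exactly what the person with the books does: match the
    # symbols, copy out the answer, understand nothing.
    return rule_book.get(question, "对不起。")  # default: "Sorry."

print(room("你好吗?"))  # prints 我很好。
```

From the outside, the replies look like understanding; inside, there is only pattern matching, which is precisely the point of the thought experiment.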
Expert Systems are an early form of artificial intelligence: software programs that use logic and rules to attempt to make the same decisions as a human expert (usually). They are usually focused on one knowledge domain. Examples of Expert Systems include:
- Medical diagnosis
- Spelling and grammar checking in word processors
- Finance - deciding whether to approve a loan or not
- Fault diagnosis in several fields (cars, computers, aircraft)
Expert systems are made up of the following: 1) User Interface 2) Knowledge Base (populated by a Knowledge Engineer) 3) An Inference Engine.
Expert systems are programmed with a set of logical rules to find a solution. The most basic use Boolean logic decision trees to arrive at conclusions. Boolean logic has two values, either a yes or a no, a 0 or a 1, in the same way that a circuit board has switches that are either on or off. The image shows a simple example of how a Boolean logic tree might work:
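A Boolean decision tree like this can also be written directly in code. The sketch below is a minimal, hypothetical fault-diagnosis tree (the questions and conclusions are invented for illustration, not taken from a real expert system):

```python
# A tiny Boolean (yes/no) decision tree, in the style a simple
# expert system might use for computer fault diagnosis.
# Each branch point asks one yes/no question.

def diagnose(has_power: bool, screen_on: bool) -> str:
    """Walk the yes/no tree until a conclusion is reached."""
    if not has_power:
        return "Check the power cable"
    if not screen_on:
        return "Check the monitor connection"
    return "Hardware looks fine; investigate the software"

print(diagnose(has_power=True, screen_on=False))
# prints: Check the monitor connection
```

Each call follows exactly one path from the root of the tree to a leaf, which is why such systems always produce a definite answer but can only reach conclusions their designer anticipated.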
Inference rules are written as IF...THEN statements. They are commonly referred to in the modern era as IFTTT (If This Then That); many modern digital assistants follow this logic (Amazon Echo, Google Assistant, Siri). In the field of Artificial Intelligence, an inference engine is the component of the system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine: the knowledge base stored facts about the world (usually one particular domain), and the inference engine applied logical rules to the knowledge base to deduce new knowledge. This process would iterate, as each new fact added to the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
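The forward-chaining process described above can be sketched in a few lines. The facts and rules below are invented for illustration (a toy car-fault domain), but the loop itself shows the core idea: keep firing IF...THEN rules until no new facts can be deduced.

```python
# A minimal forward-chaining inference engine (illustrative sketch).
# Each rule is (set of IF-conditions, THEN-conclusion).

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all known facts,
    adding each conclusion as a new fact, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions hold and the
            # conclusion is not already known.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True  # a new fact may trigger other rules
    return facts

# Invented knowledge base for a toy car-fault domain.
rules = [
    ({"engine_cranks", "no_start"}, "check_fuel"),
    ({"check_fuel", "tank_empty"}, "refuel"),
]

print(forward_chain({"engine_cranks", "no_start", "tank_empty"}, rules))
```

Note how the second rule can only fire after the first has added "check_fuel" to the facts; this is the iteration the text describes, where each deduced fact may trigger further rules.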
Robots are machines, packed with sensors for input and motors, lights or screens for output, capable of carrying out a complex series of actions automatically. Robots can be autonomous or semi-autonomous and range from humanoid robots such as Honda's ASIMO or Saudi Arabia's newest robot citizen 'Sophia', to industrial robots, medical assistance robots, swarming robots, UAV drones and even microscopic nanorobots.
Designers of robots currently face many challenges. One is aesthetic: deciding whether to make robots look human-like risks venturing into the 'Uncanny Valley' - the phenomenon whereby a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it.
The uncanny valley effect has to do with a mismatch in features of a single animation or robot, with some parts appearing much more humanlike than others. For instance, when a very human-looking head is placed on an obviously mechanical body, that can be creepy. So can a human face with robotic eyes.
"When there are elements that are both human or nonhuman, this mismatch can produce an eerie sensation in the brain," MacDorman said. "It's when different parts of the brain are coming to different conclusions at the same time."
There are other factors that may play a part, however. The uncanny valley effect could have to do with uncertainty about whether a robotic character is truly alive or dead, and may even play into our deep-seated fears of death. Alternatively, it may be a form of cognitive dissonance, which happens when a person's beliefs are not in line with their behaviours - for instance, a smoker who berates other smokers.
The other issue designers face is fully understanding the tasks facing the robot, so that those processes can be broken down and programmed as accurately as possible in order to prevent errors or, in a worst-case scenario, human injury or death.
Social Impacts - robots have several social impacts. First of all, some members of society who are affected by the 'Uncanny Valley' are reluctant to interact with and use robots. Secondly, because robots can perform tasks without needing to be paid, without needing regular breaks due to fatigue, and can essentially work 24/7 while executing instructions with extreme precision, accuracy and consistency, they have replaced many manual human workers in several industries. For example, robot usage is very high in countries with a strong automotive industry: in Japan, there are 1,562 industrial robots installed per 10,000 automotive employees (https://www.statista.com/topics/1476/industrial-robots/, 2016). Another factor is the law: if a robot causes damage or is responsible for human injury or loss of life, who is responsible? Until recently this was a very grey area, but the law is now catching up in terms of prosecuting the owners of robots that cause harm.
Ethical implications of developing intelligent systems
- Unemployment - what happens after the end of jobs?
- Inequality. How do we distribute the wealth created by machines?
- Humanity. How do machines affect our behaviour and interaction?
- Artificial stupidity. How can we guard against mistakes?
- Racist robots. How do we eliminate AI bias?
- Security. How do we keep AI safe from adversaries?
- Evil genies. How do we protect against unintended consequences?
- Singularity. How do we stay in control of a complex intelligent system?
- Robot rights. How do we define the humane treatment of AI?