Sunday 10 February 2019

Artificial Intelligence - introduction


Artificial Intelligence

UNIT – I


1.0) Introduction

What exactly is artificial intelligence? Artificial Intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. This definition is, of course, somewhat ephemeral because of its reference to the current state of computer science. And it fails to include some areas of potentially very large impact, namely problems that cannot now be solved well by either computers or people. But it provides a good outline of what constitutes artificial intelligence, and it avoids the philosophical issues that dominate attempts to define the meaning of either artificial or intelligence. Interestingly, though, it suggests a similarity with philosophy at the same time it is avoiding it. Philosophy has always been the study of those branches of knowledge that were so poorly understood that they had not yet become separate disciplines in their own right. As fields such as mathematics or physics became more advanced, they broke off from philosophy. Perhaps if AI succeeds it can reduce itself to the empty set.

1.1) Objectives
The Objective of this unit is to cover the following topics:
Ø  History of AI
Ø  AI Problems And Techniques
Ø  Problem Spaces and Searches
Ø  Heuristic Search Techniques
Ø  Game Playing
Ø  Alpha Beta Pruning

1.2) Content

1.2.1 Introduction


            Artificial Intelligence has become more relevant today as man tries to grapple with quite large problems. This field is promising as it provides avenues for making reasonable decisions and tries to emulate the human mind.

1.2.2 Foundation and History of AI


The history of research in artificial intelligence is a fascinating story, related by Pamela McCorduck (1979) in her book Machines Who Think. Because almost all of what we call AI has been developed over the last 30 years, McCorduck was able to conduct her research for the book by actually interviewing almost all of the people whose work was influential in forming the field.

The major journal of AI research is called simply Artificial Intelligence. In addition, Cognitive Science is devoted to papers dealing with the overlapping areas of psychology, linguistics, and artificial intelligence. AI Magazine is a more ephemeral, less technical magazine that is published by the American Association for Artificial Intelligence (AAAI). IEEE Expert and several other journals publish papers about expert systems in a wide variety of application domains.

Since 1969, there has been a major AI conference, the International Joint Conference on Artificial Intelligence (IJCAI), held every two years. The proceedings of these conferences give a good picture of the work that was taking place at the time. The other important AI conference, held three out of every four years starting in 1980, is sponsored by the AAAI, and its proceedings, too, are published.

1.2.3 The AI Problems


What then are some of the problems contained within AI? Much of the early work in the field focused on formal tasks, such as game playing and theorem proving. Samuel wrote a checkers-playing program that not only played games with opponents but also used its experience at those games to improve its later performance. Chess also received a good deal of attention.

The Logic Theorist was an early attempt to prove mathematical theorems. Game playing and theorem proving share the property that people who do them well are considered to be displaying intelligence. Despite this, it appeared initially that computers could perform well at those tasks simply by being fast at exploring a large number of solution paths and then selecting the best one. It was thought that this process required very little knowledge and could therefore be programmed easily. As we will see later, this assumption turned out to be false since no computer is fast enough to overcome the combinatorial explosion generated by most problems.

Another early foray into AI focused on the sort of problem solving that we do every day when we decide how to get to work in the morning, often called commonsense reasoning. It includes reasoning about physical objects and their relationships to each other (e.g., an object can be in only one place at a time), as well as reasoning about actions and their consequences (e.g., if you let go of something, it will fall to the floor and maybe break). To investigate this sort of reasoning, Newell, Shaw, and Simon built the General Problem Solver (GPS), which they applied to several commonsense tasks as well as to the problem of performing symbolic manipulations of logical expressions. Again, no attempt was made to create a program with a large amount of knowledge about a particular problem domain. Only quite simple tasks were selected.

As AI research progressed and techniques for handling larger amounts of world knowledge were developed, some progress was made on the tasks just described and new tasks could reasonably be attempted. These include perception (vision and speech), natural language understanding, and problem solving in specialized domains such as medical diagnosis and chemical analysis.

Perception of the world around us is crucial to our survival. Animals with much less intelligence than people are capable of more sophisticated visual perception than are current machines. Perceptual tasks are difficult because they involve analog (rather than digital) signals; the signals are typically very noisy and usually a large number of things (some of which may be partially obscuring others) must be perceived at once.

The ability to use language to communicate a wide variety of ideas is perhaps the most important thing that separates humans from the other animals. The problem of understanding spoken language is a perceptual problem and is hard to solve for the reasons just discussed. But suppose we simplify the problem by restricting it to written language. This problem, usually referred to as natural language understanding, is still extremely difficult.

Mundane Tasks

Ø  Perception
                — Vision
                — Speech

Ø  Natural language
                — Understanding
                — Generation
                — Translation

Ø  Commonsense reasoning
Ø  Robot control
                                               
Formal Tasks

Ø  Games
                — Chess
                — Backgammon
                — Checkers
                — Go
Ø  Mathematics
                — Geometry
                — Logic
                — Integral calculus
                — Proving properties of programs
                                             
Expert Tasks

Ø  Engineering
                — Design
                — Fault finding
                — Manufacturing planning

Ø  Scientific analysis
Ø  Medical diagnosis
Ø  Financial analysis
Figure 1.1: Some of the Task Domains of Artificial Intelligence

In order to understand sentences about a topic, it is necessary to know not only a lot about the language itself (its vocabulary and grammar) but also a good deal about the topic so that unstated assumptions can be recognized. In addition to these mundane tasks, many people can also perform one or maybe more specialized tasks in which carefully acquired expertise is necessary. Examples of such tasks include engineering design, scientific discovery, medical diagnosis, and financial planning. Programs that can solve problems in these domains also fall under the aegis of Artificial Intelligence. Figure 1.1 lists some of the tasks that are the targets of work in AI.

A person who knows how to perform tasks from several of the categories shown in the figure learns the necessary skills in a standard order. First perceptual, linguistic, and commonsense skills are learned. Later (and of course for some people, never) expert skills such as engineering, medicine, or finance are acquired. It might seem to make sense, then, that the earlier skills are easier and thus more amenable to computerized duplication than are the later, more specialized ones. For this reason, much of the initial AI work was concentrated in those early areas. But it turns out that this naive assumption is not right. Although expert skills require knowledge that many of us do not have, they often require much less knowledge than do the more mundane skills, and that knowledge is usually easier to represent and deal with inside programs.

As a result, the problem areas where AI is now flourishing most as a practical discipline (as opposed to a purely research one) are primarily the domains that require only specialized expertise without the assistance of commonsense knowledge. There are now thousands of programs called expert systems in day-to-day operation throughout all areas of industry and government. Each of these systems attempts to solve part, or perhaps all, of a practical, significant problem that previously required scarce human expertise.




1.2.4 AI Technique


Artificial intelligence problems span a very broad spectrum. They appear to have very little in common except that they are hard. One of the few hard and fast results to come out of the first three decades of AI research is that intelligence requires knowledge. To compensate for its one overpowering asset, indispensability, knowledge possesses some less desirable properties, including:

Ø  It is voluminous.

Ø  It is hard to characterize accurately.

Ø  It is constantly changing.

Ø  It differs from data by being organized in a way that corresponds to the ways it will be used.

An AI technique is a method that exploits knowledge that should be represented in such a way that:

Ø  The knowledge captures generalizations. In other words, it is not necessary to represent separately each individual situation. Instead, situations that share important properties are grouped together. If knowledge does not have this property, inordinate amounts of memory and updating will be required. So we usually call something without this property “data” rather than knowledge.

Ø  It can be understood by people who must provide it. Although for many programs, the bulk of the data can be acquired automatically (for example, by taking readings from a variety of instruments), in many Al domains, most of the knowledge a program has must ultimately be provided by people in terms they understand.

Ø  It can easily be modified to correct errors and to reflect changes in the world and in our world view.        

Ø  It can be used in a great many situations even if it is not totally accurate or complete.

Ø  It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.

Although AI techniques must be designed in keeping with these constraints imposed by AI problems, there is some degree of independence between problems and problem-solving techniques. It is possible to solve AI problems without using AI techniques (although, as we suggested above, those solutions are not likely to be very good). And it is possible to apply AI techniques to the solution of non-AI problems. This is likely to be a good thing to do for problems that possess many of the same characteristics as do AI problems. In order to try to characterize AI techniques in as problem-independent a way as possible, let's look at one problem and a series of approaches for solving it.

 

Tic-Tac-Toe


In this section, we present a series of three programs to play tic-tac-toe. The programs in this series increase in:

Ø  Their complexity

Ø  Their use of generalizations

Ø  The clarity of their knowledge

Ø  The extensibility of their approach

Thus they move toward being representations of what we call AI techniques.

 



Program 1

Data Structures

Board              A nine-element vector representing the board, where the elements of the vector correspond to the board positions as follows:

   1 | 2 | 3
  ---+---+---
   4 | 5 | 6
  ---+---+---
   7 | 8 | 9

An element contains the value 0 if the corresponding square is blank, 1 if it is filled with an X, or 2 if it is filled with an O.

Movetable       A large vector of 19,683 elements (3⁹), each element of which is a nine-element vector. The contents of this vector are chosen specifically to allow the algorithm to work.
The Algorithm

To make a move, do the following:

  1. View the vector Board as a ternary (base three) number. Convert it to a decimal number.

  2. Use the number computed in step 1 as an index into Movetable and access the vector stored there.

  3. The vector selected in step 2 represents the way the board will look after the move that should be made. So set Board equal to that vector.
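To make the lookup concrete, here is a minimal Python sketch of this procedure. It is an illustration under the stated assumptions, not the book's program: the function names are made up, the board uses the 0/1/2 encoding above, and the 19,683-entry movetable is assumed to have been filled in elsewhere (it is far too large to reproduce here).

# A minimal sketch of Program 1 (illustrative names; the movetable is assumed given).
# board: a list of nine values, 0 = blank, 1 = X, 2 = O, for squares 1 through 9.

def board_to_index(board):
    # View the board as a ternary (base-3) number, taking board[0] as the most
    # significant digit (the text does not fix the digit order), and convert it
    # to a decimal number.
    index = 0
    for value in board:
        index = index * 3 + value
    return index

def make_move(board, movetable):
    # Use the decimal number as an index into the movetable; the vector stored
    # there is the way the board should look after the move, so simply return it.
    return movetable[board_to_index(board)]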

Comments

This program is very efficient in terms of time. And, in theory, it could play an optimal game of tic-tac-toe. But it has several disadvantages:
                 
Ø  It takes a lot of space to store the table that specifies the correct move to make from each board position.

Ø  Someone will have to do a lot of work specifying all the entries in the movetable.

Ø  It is very unlikely that all the required movetable entries can be determined and entered without any errors.

Ø  If we want to extend the game, say to three dimensions, we would have to start from scratch, and in fact this technique would no longer work at all, since 3²⁷ board positions would have to be stored, thus overwhelming present computer memories.

The technique embodied in this program does not appear to meet any of our requirements for a good AI technique. Let's see if we can do better.



Program 2

Data Structures

Board              A nine-element vector representing the board, as described for Program 1. But instead of using the numbers 0, 1, or 2 in each element, we store 2 (indicating blank), 3 (indicating X), or 5 (indicating O).

Turn                 An integer indicating which move of the game is about to be played; 1 indicates the first move, 9 the last.


The Algorithm

The main algorithm uses three subprocedures:

Make2             Returns 5 if the center square of the board is blank, that is, if Board[5] = 2. Otherwise, this function returns any blank noncorner square (2, 4, 6, or 8).

Posswin(p)      Returns 0 if player p cannot win on his next move; otherwise, it returns the number of the square that constitutes a winning move. This function will enable the program both to win and to block the opponent's win. Posswin operates by checking, one at a time, each of the rows, columns, and diagonals. Because of the way values are numbered, it can test an entire row (column or diagonal) to see if it is a possible win by multiplying the values of its squares together. If the product is 18 (3 x 3 x 2), then X can win. If the product is 50 (5 x 5 x 2), then O can win. If we find a winning row, we determine which element is blank, and return the number of that square.

Go(n)               Makes a move in square n. This procedure sets Board[n] to 3 if Turn is odd, or 5 if Turn is even. It also increments Turn by one.
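These three subprocedures can be written down almost directly. The sketch below is a hedged illustration, not the book's code: it assumes the board is a ten-element Python list whose index 0 is unused (so squares are numbered 1 to 9) with the 2/3/5 value encoding described above.

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
         (1, 5, 9), (3, 5, 7)]              # diagonals

def make2(board):
    # Return 5 if the center square is blank, otherwise any blank noncorner square.
    if board[5] == 2:
        return 5
    for square in (2, 4, 6, 8):
        if board[square] == 2:
            return square

def posswin(board, p):
    # Return the square where player p (3 for X, 5 for O) can win on the next
    # move, or 0 if there is none.  A line whose values multiply to 18 (3*3*2)
    # can be completed by X; a product of 50 (5*5*2) can be completed by O.
    target = 18 if p == 3 else 50
    for a, b, c in LINES:
        if board[a] * board[b] * board[c] == target:
            for square in (a, b, c):
                if board[square] == 2:      # the blank square completes the line
                    return square
    return 0

def go(board, n, turn):
    # Place a mark in square n: X (3) on odd turns, O (5) on even turns,
    # and return the incremented turn counter.
    board[n] = 3 if turn % 2 == 1 else 5
    return turn + 1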

The algorithm has a built-in strategy for each move it may have to make. It makes the odd-numbered moves if it is playing X, the even-numbered moves if it is playing O. The strategy for each turn is as follows:

Turn=1            Go(1) (upper left corner).

Turn=2 If Board[5] is blank, Go(5), else Go(1).

Turn=3 If Board[9] is blank, Go(9), else Go(3).

Turn=4            If Posswin(X) is not 0, then Go(Posswin(X)) [i.e., block opponent’s win], else Go(Make2).

Turn=5            If Posswin(X) is not 0 then Go(Posswin(X)) [i.e., win], else if Posswin(O) is not 0, then Go(Posswin(O)) [i.e., block win], else if Board[7] is blank, then Go(7), else Go(3). [Here the program is trying to make a fork.]

Turn=6            If Posswin(O) is not 0 then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else Go(Make2).

Turn=7            If Posswin(X) is not 0 then Go(Posswin(X)), else if Posswin(O) is not 0, then Go(Posswin(O)), else go anywhere that is blank.

Turn=8            If Posswin(O) is not 0 then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else go anywhere that is blank.

Turn=9 Same as Turn=7.
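Using the posswin and make2 helpers sketched earlier, the whole strategy table can be expressed as one dispatch function. Again this is a paraphrase for illustration: it returns the square to play rather than calling Go directly.

def choose_move(board, turn):
    # Return the square (1-9) to play on the given turn, following the table above.
    X, O = 3, 5
    blank = lambda n: board[n] == 2
    def any_blank():
        return next(n for n in range(1, 10) if blank(n))
    if turn == 1:
        return 1                                            # upper left corner
    if turn == 2:
        return 5 if blank(5) else 1
    if turn == 3:
        return 9 if blank(9) else 3
    if turn == 4:
        return posswin(board, X) or make2(board)            # block opponent, else Make2
    if turn == 5:
        return (posswin(board, X) or posswin(board, O)      # win, else block,
                or (7 if blank(7) else 3))                  # else try to make a fork
    if turn == 6:
        return posswin(board, O) or posswin(board, X) or make2(board)
    if turn == 8:
        return posswin(board, O) or posswin(board, X) or any_blank()
    return posswin(board, X) or posswin(board, O) or any_blank()   # turns 7 and 9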



This program is not quite as efficient in terms of time as the first one since it has to check several conditions before making each move. But it is a lot more efficient in terms of space. It is also a lot easier to understand the program's strategy or to change the strategy if desired. But the total strategy has still been figured out in advance by the programmer. Any bugs in the programmer's tic-tac-toe playing skill will show up in the program's play. And we still cannot generalize any of the program's knowledge to a different domain, such as three-dimensional tic-tac-toe.



This program is identical to Program 2 except for one change in the representation of the board. We again represent the board as a nine-element vector, but this time we assign board positions to vector elements as follows:

   8 | 3 | 4
  ---+---+---
   1 | 5 | 9
  ---+---+---
   6 | 7 | 2

Notice that this numbering of the board produces a magic square: all the rows, columns, and diagonals sum to 15. This means that we can simplify the process of checking for a possible win. In addition to marking the board as moves are made, we keep a list, for each player, of the squares in which he or she has played. To check for a possible win for one player, we consider each pair of squares owned by that player and compute the difference between 15 and the sum of the two squares. If this difference is not positive or if it is greater than 9, then the original two squares were not collinear and so can be ignored. Otherwise, if the square representing the difference is blank, a move there will produce a win. Since no player can have more than four squares at a time, there will be many fewer squares examined using this scheme than there were using the more straightforward approach of Program 2. This shows how the choice of representation can have a major impact on the efficiency of a problem-solving program.
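A small Python sketch of that check follows; it assumes each player's squares are recorded by their magic-square numbers, and the function and variable names are illustrative only.

from itertools import combinations

MAGIC_LAYOUT = [8, 3, 4, 1, 5, 9, 6, 7, 2]   # magic numbers, reading the grid row by row

def winning_square(my_squares, blank_squares):
    # Return a blank square that completes a line for this player, or None.
    # my_squares: magic numbers the player already owns;
    # blank_squares: magic numbers of the squares still empty.
    for a, b in combinations(my_squares, 2):
        need = 15 - (a + b)
        if need < 1 or need > 9:     # not a legal square: the pair was not collinear
            continue
        if need in blank_squares:
            return need
    return None

# Example: owning squares 8 and 3 (a row of the magic square), the winning move is 4.
print(winning_square({8, 3}, {4, 1, 9, 6, 7, 2}))   # prints 4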

Comments


This comparison raises an interesting question about the relationship between the way people solve problems and the way computers do. Why do people find the row-scan approach easier, while the number-counting approach is more efficient for a computer? We do not know enough about how people work to answer that question completely. One part of the answer is that people are parallel processors and can look at several parts of the board at once, whereas the conventional computer must look at the squares one at a time. Sometimes an investigation of how people solve problems sheds great light on how computers should do so. At other times, the differences in the hardware of the two seem so great that different strategies seem best. As we learn more about problem solving both by people and by machines, we may know better whether the same representations and algorithms are best for both people and machines.


Program 3

Data Structures

BoardPosition             A structure containing a nine-element vector representing the board, a list of board positions that could result from the next move, and a number representing an estimate of how likely the board position is to lead to an ultimate win for the player to move.

The Algorithm
To decide on the next move, look ahead at the board positions that result from each possible move. Decide which position is best (as described below), make the move that leads to that position, and assign the rating of that best move to the current position.

To decide which of a set of board positions is best, do the following for each of them:

  1. See if it is a win. If so, call it the best by giving it the highest possible rating.

  2. Otherwise, consider all the moves the opponent could make next. See which of them is worst for us (by recursively calling this procedure). Assume the opponent will make that move. Whatever rating that move has, assign it to the node we are considering.

  3. The best node is then the one with the highest rating.

This algorithm will look ahead at various sequences of moves in order to find a sequence that leads to a win. It attempts to maximize the likelihood of winning, while assuming that the opponent will try to minimize that likelihood. This algorithm is called the minimax procedure.
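The following self-contained Python sketch illustrates the minimax idea on tic-tac-toe. It is an illustration of the procedure, written in the compact form where a score is negated as it is handed back up the tree, and is not the book's program; the board is assumed to be a tuple of nine cells holding ' ', 'X', or 'O'.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, otherwise None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Return (score, move) for the player about to move:
    # +1 means a forced win, 0 a draw, -1 a forced loss.
    if winner(board):                        # the previous player completed a line
        return -1, None                      # worst outcome for the player to move
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                       # no squares left: a draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score = -minimax(child, opponent)[0]     # the opponent tries to minimize us
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Searching the whole game tree from the empty board is slow but feasible:
print(minimax((' ',) * 9, 'X')[0])           # 0: with perfect play the game is a draw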

 

Comments


This program will require much more time than either of the others since it must search a tree representing all possible move sequences before making each move. But it is superior to the other programs in one very big way: It could be extended to handle games more complicated than tic-tac-toe, for which the exhaustive enumeration approach of the other programs would completely fall apart. It can also be augmented by a variety of specific kinds of knowledge about games and how to play them. For example, instead of considering all possible next moves, it might consider only a subset of them that are determined, by some simple algorithm, to be reasonable. And, instead of following each series of moves until one player wins, it could search for a limited time and evaluate the merit of each resulting board position using some static function.

Program 3 is an example of the use of an AI technique. For very small problems, it is less efficient than a variety of more direct methods. However, it can be used in situations where those methods would fail.


1.2.5 Introduction to Lisp and Prolog


Writing parallel programs is a difficult task for humans, and there is some hope that parallel implementations of these languages (perhaps augmented with parallel programming constructs) will make effective speedups more practical. Parallel LISP models include Multilisp, QLISP, and the Paralation Model. Parallel PROLOG models include Concurrent PROLOG, PARLOG, and Guarded Horn Clauses.

Research into parallel logic programming languages was an important focus of the Japanese Fifth Generation project. Languages like PROLOG immediately suggest two types of parallelism. In OR-parallelism, multiple paths to the same goal are taken in parallel. For example, suppose we have the following clauses:

                        uncle(X,Y) :- mother(Z,Y), sibling(X,Z).
                        uncle(X,Y) :- father(Z,Y), sibling(X,Z).

Then the query

?- uncle(John, Bill)

could be satisfied in two different ways since John could be the sibling of Bill’s mother or of Bill’s father. A sequential implementation would try to satisfy the first condition, and then, if that failed, try the second condition. There is no reason, however, why these two paths could not be pursued in parallel.
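The Python fragment below is only a simulation of this idea using ordinary threads; the family facts and helper functions are invented for the example, and no real PROLOG engine is involved. It shows the two clauses for uncle being evaluated at the same time, with the goal succeeding if either succeeds.

from concurrent.futures import ThreadPoolExecutor

mothers = {"Bill": "Mary"}                   # mother(Z, Y): mothers[Y] = Z
fathers = {"Bill": "Tom"}                    # father(Z, Y): fathers[Y] = Z
siblings = {("John", "Mary")}                # sibling(X, Z) facts

def uncle_via_mother(x, y):                  # uncle(X,Y) :- mother(Z,Y), sibling(X,Z).
    z = mothers.get(y)
    return z is not None and (x, z) in siblings

def uncle_via_father(x, y):                  # uncle(X,Y) :- father(Z,Y), sibling(X,Z).
    z = fathers.get(y)
    return z is not None and (x, z) in siblings

def uncle(x, y):
    # OR-parallelism: pursue both clauses at once rather than one after the other.
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = pool.map(lambda clause: clause(x, y),
                           (uncle_via_mother, uncle_via_father))
        return any(results)

print(uncle("John", "Bill"))                 # True, satisfied via the mother clause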

In AND-parallelism, the portions of a conjunctive goal are pursued in parallel. Consider the clause:

            infieldfly(X) :- fly(X), infieldcatchable(X),
                             occupiedbase(first), outs(zero).

Here, the four conditions can be checked in parallel, possibly leading to a four-fold speedup in processing infieldfly queries. Such AND-parallelism is not so straightforward when variables are shared across goals, as in:

uncle(X,Y) :- mother(Z,Y), sibling(X,Z).

The mother(Z,Y) and sibling(X,Z) conditions cannot be satisfied independently, since they must instantiate the variable Z in the same manner.

Research on parallel logic programming shares the same goal as that on parallel production systems: to permit the efficient execution of high-level, easily written code for AI systems.

Parallelizing AI Algorithms


Some problems are more amenable to parallel solutions than others. While nine authors may be able to write a book much faster than one author (if they each write separate chapters), nine women cannot bear a child any faster than one can. Likewise, throwing more processors at an AI problem may not bring the desired benefits. One example of an inherently sequential problem in AI is unification. While multiple processors can help somewhat, formal arguments show that vast speedups in the unification of large terms are not possible.

Many problems can be solved efficiently by parallel methods, but it is not always a simple matter to convert a sequential algorithm into an efficient parallel one. Some AI algorithms whose parallel aspects have been studied are best-first search [Kumar et al., 1988], alpha-beta pruning, constraint satisfaction, natural language parsing, resolution theorem proving, and property inheritance.





1.2.6 Logic Programming


Logic programming is a programming language paradigm in which logical assertions are viewed as programs. There are several logic programming systems in use today, the most popular of which is PROLOG. A PROLOG program is described as a series of logical assertions, each of which is a Horn clause. A Horn clause is a clause that has at most one positive literal. Thus p, ¬p ∨ q, and p → q are all Horn clauses. The last of these does not look like a clause and it appears to have two positive literals. Any logical expression can be converted to clause form. If we do that for this example, the resulting clause is ¬p ∨ q, which is a well-formed Horn clause. As we will see below, when Horn clauses are written in PROLOG programs, they actually look more like the form we started with (an implication with at most one literal on the right of the implication sign) than the clause form we just produced. Some examples of PROLOG Horn clauses appear below.

 ∀x :  pet(x) ∧ small(x) → apartmentpet(x)
 ∀x :  cat(x) ∨ dog(x) → pet(x)
 ∀x :  poodle(x) → dog(x) ∧ small(x)
 poodle(fluffy)

A Representation in Logic

apartmentpet(X)  :-  pet(X), small(X).
pet(X)  :-  cat(X).
pet(X)  :-  dog(X).
dog(X) :-  poodle(X).
small(X) :- poodle(X).
poodle(fluffy).

A Representation in PROLOG

Figure 1.2: A Declarative and a Procedural Representation

The fact that PROLOG programs are composed only of Horn clauses and not of arbitrary logical expressions has two important consequences. The first is that because of the uniform representation a simple and efficient interpreter can be written. The second consequence is even more important. The logic of Horn clause systems is decidable (unlike that of full first-order predicate logic).

The PROLOG interpreter imposes a fixed control structure on PROLOG programs. The input to a program is a goal to be proved. Backward reasoning is applied to try to prove the goal from the assertions in the program. The program is read top to bottom, left to right, and search is performed depth-first with backtracking.

Figure 1.2 shows an example of a simple knowledge base represented in standard logical notation and then in PROLOG. Both of these representations contain two types of statements, facts, which contain only constants (i.e., no variables) and rules, which do contain variables. Facts represent statements about specific objects. Rules represent statements about classes of objects.
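To make that control structure concrete, here is a small Python sketch of backward, depth-first reasoning over the Figure 1.2 knowledge base. It is deliberately restricted to one-place predicates about a single object, so no general unification is needed; it only illustrates the search order and is not a PROLOG implementation.

facts = {("poodle", "fluffy")}                      # ground facts
rules = {                                           # head: list of alternative bodies
    "apartmentpet": [["pet", "small"]],
    "pet": [["cat"], ["dog"]],
    "dog": [["poodle"]],
    "small": [["poodle"]],
}

def prove(pred, obj):
    # Backward chaining: try to prove pred(obj) from the facts and rules,
    # reading clauses top to bottom and subgoals left to right, depth first,
    # backtracking to the next clause when a subgoal fails.
    if (pred, obj) in facts:
        return True
    for body in rules.get(pred, []):                       # OR: alternative clauses
        if all(prove(subgoal, obj) for subgoal in body):   # AND: subgoals in order
            return True
    return False

print(prove("apartmentpet", "fluffy"))              # True
print(prove("apartmentpet", "rex"))                 # False: nothing is known about rex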
