4.1 Task analysis
Several of the methods developed by knowledge engineers to elicit knowledge from human experts for the purpose of building expert systems can be used to obtain concepts in any domain. These concepts often map onto objects. This is not the place for an exegesis on methods of knowledge acquisition, but we should mention the usefulness of methods based on Kelly grids (or repertory grids), protocol analysis, task analysis and interviewing theory. The use of the techniques of Kelly grids for object identification is explained later in this section. Protocol analysis (Ericsson and Simon, 1984) is in some ways similar to the procedure outlined earlier of analysing parts of speech, and task analysis can reveal both objects and their methods. Task analysis is often used in UI design (Daniels, 1986; Johnson, 1992).
Broadly, task analysis is a functional approach to knowledge elicitation which involves breaking down a problem into a hierarchy of tasks that must be performed. The objectives of task analysis in general can be outlined as the definition of:
The result is a task description which may be formalized in some way, such as by flowcharts, logic trees or even a formal grammar. The process does not, however, describe knowledge directly. That is, it does not attempt to capture the underlying knowledge structure but tries to represent how the task is performed and what is needed to achieve its aim. Any conceptual or procedural knowledge and any objects which are obtained are only elicited incidentally.
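One of the formalizations mentioned above is a logic tree of tasks. The following sketch shows one possible representation of such a hierarchy; the task names and the flattening operation are illustrative assumptions, not part of any particular task analysis method.

```python
# A minimal sketch of a task hierarchy: each task either decomposes
# into subtasks or is a primitive action. All task names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def primitives(self):
        """Depth-first list of the primitive (leaf) actions."""
        if not self.subtasks:
            return [self.name]
        return [p for t in self.subtasks for p in t.primitives()]

make_tea = Task("make tea", [
    Task("boil water", [Task("fill kettle"), Task("switch kettle on")]),
    Task("add tea bag"),
    Task("pour water"),
])

print(make_tea.primitives())
# → ['fill kettle', 'switch kettle on', 'add tea bag', 'pour water']
```

Flattening the tree into its leaves recovers the sequence of unitary actions, which is the level at which a task description interfaces with implementation technology.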
In task analysis the objective constraints on problem solving are exploited, usually prior to a later protocol analysis stage. The method consists in arriving at a classification of the factors involved in problem solving and the identification of the atomic 'tasks' involved. The categories that apply to an individual task might include:
This implies that it is also necessary to identify the actions and types in a taxonomic manner. For example, if we were to embark on a study of poker playing we might start with the following crude structure:
Types: Card, Deck, Hand, Suit, Player, Table, Coin
Actions: Deal, Turn, See, Collect
One form of task analysis assumes that concepts are derivable from pairing actions with types; e.g. 'See player', 'Deal card'. Once the concepts can be identified it is necessary to identify plans or objectives (win game, make money) and strategies (bluff at random) and use this analysis to identify the knowledge required and used by matching object-action pairs to task descriptions occurring in task sequences. As mentioned before, this is important since objects are identified in relation to purposes.
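The pairing of actions with types described above can be sketched mechanically. In the sketch below, the vocabulary is taken from the poker example in the text, but the pairing rule and the filtering against a task sequence are illustrative assumptions.

```python
from itertools import product

# Poker domain vocabulary from the text.
types = ["card", "deck", "hand", "suit", "player", "table", "coin"]
actions = ["deal", "turn", "see", "collect"]

# Candidate concepts: every action-type pair.
candidates = {f"{a} {t}" for a, t in product(actions, types)}

# In practice only the pairs that actually occur in observed task
# sequences are kept; here we match against a hypothetical sequence.
task_sequence = ["deal card", "see player", "collect coin"]
concepts = [c for c in task_sequence if c in candidates]
print(concepts)  # → ['deal card', 'see player', 'collect coin']
```

The point of the filter is the one made in the text: objects are identified in relation to purposes, so only pairs that appear in purposeful task sequences survive as concepts.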
As a means of breaking down the problem area into its constituent sub-problems, task analysis is useful in a similar way to data flow analysis or entity modelling. Although the method does incorporate the analysis of the objects associated with each task, it lacks graphical techniques for representing these objects, and therefore remains mostly useful for functional elicitation.
The approach to cognitive task analysis recommended by Braune and Foshay (1983), based on human information processing theory, is less functional than the basic approach to task analysis as outlined above. Their three-step strategy begins by concentrating on the analysis of concepts; the second stage is to define the relations between concepts by analysing examples; the third is to build on the resulting schema by analysing larger problem sets. The schema that results from this analysis is a model of the knowledge structure of an expert, similar to that achieved by the concept sorting methods associated with Kelly grids, describing the 'chunking' of knowledge by the expert. This chunking is controlled by the idea of expectancy in the theory of human information processing; i.e. the selection of the correct stimuli for solving the problem, and the knowledge of how to deal with these stimuli. As pointed out by Swaffield (1990), this approach is akin to the ideas of object modelling because of its concentration on the analysis of concepts and relations before further analysis of functions/tasks.
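A schema of concepts and their relations can be sketched as a set of labelled links. The poker concepts and relation labels below are illustrative assumptions, not taken from Braune and Foshay.

```python
# A sketch of a concept schema as labelled relations between concepts,
# in the spirit of defining relations by analysing examples.
relations = [
    ("hand", "consists-of", "card"),
    ("card", "belongs-to", "suit"),
    ("player", "holds", "hand"),
    ("deck", "consists-of", "card"),
]

def schema_for(concept):
    """All relations in which the given concept participates."""
    return [r for r in relations if concept in (r[0], r[2])]

print(schema_for("card"))
```

Analysing larger problem sets would, on this sketch, simply add further triples and so grow the expert's 'chunked' schema incrementally.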
A task is a particular instance of a procedure that achieves a goal. There can be many tasks that achieve the same goal. Use cases are examples of tasks; they should always state their goal. We hope eventually to extract a business object model from the tasks we have discovered.
In applications where the functions are more immediately apparent to consciousness than the objects and concepts, task analysis is a useful way of bootstrapping an object-oriented analysis. This is often true in tasks where there is a great deal of unarticulated, latent or compiled knowledge. Task scripts can be deepened into task analysis tree structures where this is helpful.
Task analysis will not help with the incorporation of the many psychological factors which are always present in deal capture or similar processes, and which are often quite immeasurable. Other incommensurables might include the effects of such environmental factors as ambient noise and heat, and the general level of distracting stimuli.
In some ways it could be held that the use of a formal technique such as task analysis in the above example can add nothing that common sense could not have derived. However, its use in structuring the information derived from interviews is invaluable for the following reasons. Firstly, the decomposition of complex tasks into more primitive or unitary actions enables one to arrive at a better understanding of the interface between the tasks and the available implementation technology. This leads to a far better understanding of the possibilities for empirical measurement of the quality of the interface. Secondly, the very process of constructing and critiquing the task hierarchy diagrams helps to uncover gaps in the analysis, and thus to remove contradictions.
Task analysis is primarily useful in method identification rather than for finding objects, although objects are elicited incidentally. We now turn to methods borrowed from knowledge engineering which address the object identification problem more directly.
Basden (1990 and 1990a) suggests, again in the context of knowledge acquisition for expert systems, a method which may be of considerable use in identifying objects and their attributes and methods. He offers the example of a knowledge engineer seeking high-level rules of thumb based on experience (heuristics). Suppose, in the domain of Gardening, that we have discovered that regular mowing produces good lawns. The knowledge engineer should not be satisfied with this because it does not show the boundaries of the intended system's competence - we do not want a system that gives confident advice in areas where it is incompetent. We need a deeper understanding. Thus, the next question asked of the expert might be of the form: 'why?'. The answer might be: 'Because regular mowing reduces coarse grasses and encourages springy turf'. What we have obtained here are two attributes of the object 'good turf' - whose parent in a hierarchy is 'turf', of course. Why does regular mowing lead to springy turf? Well, it helps to promote leaf branching. Now we are beginning to elicit methods as we approach causal knowledge. To help define the boundaries, Basden suggests asking 'what else' and 'what about ...' questions. In the example we have given, the knowledge engineer should ask: 'what about drought conditions?' or 'what else gives good lawns?'. These questioning techniques are immensely useful for analysts using an object-oriented approach.
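The objects elicited by this questioning can be sketched directly, using the gardening example from the text. The class design is an illustrative assumption: 'good turf' becomes a subclass of 'turf', the answers to 'why?' become attributes, and the causal knowledge becomes a method.

```python
# Objects elicited by Basden-style 'why?' questioning (illustrative).
class Turf:
    """Parent object in the elicited hierarchy."""
    pass

class GoodTurf(Turf):
    def __init__(self):
        # Attributes from the answer 'regular mowing reduces coarse
        # grasses and encourages springy turf'.
        self.coarse_grass_level = "high"
        self.springiness = "low"

    def mow_regularly(self):
        # Method from the causal answer to a further 'why?':
        # regular mowing promotes leaf branching, hence springy turf.
        self.coarse_grass_level = "low"
        self.springiness = "high"
```

Each round of 'why?' questioning deepens the model in the same way: attributes first, then the causal methods that change them.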