The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement no. 231868.
From chatbots and embodied conversational agents (ECAs) on the web, through synthetic characters in computer games and virtual assistants on the desktop, to computers that answer phones, the primary way to create an operational conversational system is for someone to use introspection over log files to decide what he or she would say, and thus what the machine should have said. These systems are far from perfect in an interesting way: they are rarely simply ineffective; they are usually downright annoying [de Angeli 2005]. Why is that? What are we missing about conversational agents, and is there a better way to move from raw data of a form we can collect to the design of better conversational interfaces?
Computer scientists have of course been interested in computers and language from the start, with some early successes. When the research community has looked at dialog systems in the wild, however, results have been disappointing [Walker et al 2002] [Wallis 2008]. Indeed, much of the work in the area is aimed not at the dialog problem itself but at ways of having machine learning solve the problem for us by using, for example, POMDPs [Young 2007]. The data sparsity issue, however, means that in practice these techniques are run over annotated training and test data. These annotation schemes abstract away from the raw data and arguably represent the theoretical content of the field [Hovy 2010]. This is discussed further below. Computer science, looking to the dialog problem itself, generally views language as a means of conveying information, but from Heidegger and Wittgenstein on, we know there is more to language-in-use than informing. The idea of language as action [Searle 1969] highlights its social aspects. If language is to humans what preening is to monkeys [Dunbar 1996], the very act of talking to someone is making and maintaining social relations. Making systems polite is not simply a matter of saying please and thank you; it involves knowing when something can be said, how to say it, and, if necessary, how to mitigate the damage. What is more, the effort put into the mitigation of a face-threatening act (FTA) is part of the message [Brown and Levinson 1987]. To a greater or lesser extent, conversational agents simulate human behaviour as a social animal and, rather than viewing dialog primarily as information state update [Traum et al 1999] or as a conduit for information [Reddy 1993], we view conversational systems as interactive artifacts in our space.
The ostensible purpose of the SERA rabbits was to encourage exercise among the over 50s. The setup (see figure) could detect when someone was there using its motion detector (PIR) and was given an `exercise plan' that the subject was expecting to follow. If the subject picked up the house keys in a period when she was expected to go swimming, the rabbit would say ``Are you going swimming? Have a good time.'' When she returned it would say ``Did you have a good time swimming?'' If the subject responded, it would ask if she had stuck to the amount of exercise she had planned and record the amount actually done in a diary.

The figure shows the `house keys' being returned to the hook sensor, triggering the `welcome home' script.
The dialog manager for the SERA system was a classic state-based system with simple pattern-action rules to determine what to say next, what actions to perform, and whether to transition to a new state. We used best practice to develop the scripting and employed a talented individual with no training to ``use introspection over log files to decide what he or she would say, and thus what the machine should have said.'' What is sometimes missed is the effort that goes into scripting even simple dialogs. Viewing a beginning-to-end conversation as a decision tree covering the user's choices, a two-way choice at the root of the tree rather than no choice doubles the amount of effort required to construct the scripts. Adding options at the leaves of the tree is of course far easier, but even then leaving options open tends to percolate back, requiring other changes at exponential cost. The grand challenge, in a technical sense, is to find ways to reuse script from existing parts of this decision space, and approaches range from no reuse (the flat structure typified by chat-bots), through state-based systems where a state (attempts to) capture a context-free `chunk' that can be safely used anywhere (the SERA approach), to full-blown dialog planning systems such as TRAINS [Allen et al 1995]. But that is another story.
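To make the kind of machinery involved concrete, the sketch below shows a minimal state-based, pattern-action dialog manager of the sort described above. It is an illustration under our own assumptions: the state names, trigger patterns and utterances are invented for this paper and are not the SERA scripts themselves. Note how every additional branch in the decision tree is another entry in the table, which is where the scripting cost discussed above shows up.

```python
# Minimal sketch of a state-based, pattern-action dialog manager.
# State names, patterns and utterances are illustrative only.
import re

SCRIPT = {
    "idle": [
        (r"keys_removed", "Are you going swimming? Have a good time.", "away"),
        (r"motion", "Hello, are you going out?", "idle"),
    ],
    "away": [
        (r"keys_returned", "Did you have a good time swimming?", "ask_plan"),
    ],
    "ask_plan": [
        (r"yes", "Did you stick to the exercise you had planned?", "record"),
        (r"no", "Maybe next time. I will make a note of it.", "record"),
    ],
    "record": [
        (r".*", "Thanks, I have written that in the diary.", "idle"),
    ],
}

class DialogManager:
    """In each state, pattern-action rules decide what to say and which
    state to move to next."""

    def __init__(self, script, start="idle"):
        self.script = script
        self.state = start

    def handle(self, event):
        """Return something to say (or None) for an observed event or input."""
        for pattern, utterance, next_state in self.script.get(self.state, []):
            if re.fullmatch(pattern, event):
                self.state = next_state
                return utterance
        return None  # no rule fired: stay in the current state and say nothing

if __name__ == "__main__":
    dm = DialogManager(SCRIPT)
    for observed in ["keys_removed", "keys_returned", "yes", "30 minutes"]:
        print(observed, "->", dm.handle(observed))
```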
Things went wrong as we thought they would (discussed below), but we also faced a major technical challenge with speech recognition in a domestic setting. For iteration 1 we had simple yes/no buttons, but for the second and third iterations we used `flash cards' that the rabbit could read in order to communicate. The problems with speech recognition and our attempted solutions are detailed elsewhere [Wallis 2011]. Despite this admittedly major technical difficulty, all subjects talked to their rabbit some of the time -- some much more than others -- and all expressed emotion while interacting with it. We installed this set-up in the homes of 6 subjects for 10 days at a time, over three iterations, and collected just over 300 videoed interactions.
Having collected the data, the challenge then was to translate the raw video data into information about the design of better conversational agents. In general the project partners could not reach consensus on how to look at the data -- although it is telling that we all had an opinion on how to improve the system. There were plenty of interesting things to look at in the video data and interviews and, having identified something interesting, we could design a quantitative experiment to collect evidence. Producing papers from the data is not a problem. What was missing was a means of getting the big picture -- a means of deciding what really matters to the user. It is one thing to say that a particular conversational system needs to be more ``human-like,'' but some faults are insignificant, others are noticed but ignored, and another set of faults drives users to despair [de Angeli 2005]. Unless we can build the perfect human-like system, distinguishing between the severity of faults is key to designing interactive artifacts based on human behaviour.
As an example of the challenge of moving from data to design, in iteration 1 our talented script writer had the rabbit say `that's good' whenever the subject had done more exercise than planned. This was greeted in the video recordings with ``eye-rolling,'' the significance of which, as hopefully the rest of the paper will convince you, is obvious. Our talented expert then attempted to make the rabbit less patronizing, but she couldn't see how to do that. This makes sense given the observation by de Angeli et al [de Angeli et al 2001] that machines have very low social status. In the second iteration the rabbit scripts were changed to remove any notion of the rabbit being judgmental -- all assessment was ascribed to some other person or institution such as the research team or the National Health Service. So the question remains: is it possible to create a persuasive machine? For iteration 3, the project consortium decided that our talented expert should try harder. Like so many things about language in use, being more persuasive looks easy, but turning that into instructions for a plastic rabbit might (or might not) be impossible. A methodology for going from video data to a new design would certainly help clarify the issues involved.
Human Computer Interaction is of course a well established field, with approaches ranging from the strict reductionism often seen in psychology to the qualitative methods of the social sciences. When it comes to conversational agents, these approaches all have their uses [Wallis et al 2001], [Wallis 2008], [Payr & Wallis], but one cannot help but feel that the real issues are often lost in the detail. In an excellent book on interaction design, Sharp, Rogers and Preece provide a list of reasons why an interface might elicit negative emotional responses [Sharp et al 2007, p189]:
Applied psychology is again an established field with a range of techniques. Applied Cognitive Task Analysis [Militello & Hutton 1998] uses semi-structured interview techniques to elicit knowledge from experts in a range of tasks, from the writing of training manuals for fire-fighters to the modelling and simulation of fighter pilots. As part of a large project to create an embodied conversational agent that acted as a virtual assistant, Wizard of Oz experiments were run using an automated booking system scenario [Wallis et al 2001]. The wizard, KT, was then treated as the expert and interviewed to elicit her language skills. The interviews roughly followed the Critical Decision Method questions of O'Hare et al [O'Hare et al 1998], which are listed in the Appendix and discussed later in the paper. The conclusion at the time was that KT needed to know far more about politeness and power relations than she did about time or cars. The problem was that interviewing people about their everyday behaviour (as opposed to their expert behaviour) was difficult because not only are things like politeness just common sense to the subject, such things are perceived by the interviewee as just common sense for the interviewer. The interviewee quickly becomes suspicious about why the interviewer is asking "dumb" questions. It seems a more fundamental way of looking at human action is required.
The classic debate in the pursuit of an objective science of human behaviour is between qualitative and quantitative methods. Those with a psychology background will tend to use quantitative methods and report results with statistical significance. The methods include structured interviews and questionnaires, press bars and eye trackers. Formal methodologies that use statistical evidence rely on having a prior hypothesis [Shavelson 1988] and the formation of hypotheses is left to researcher insight and what are often called "fishing trips" over existing data. A positive result from formal quantitative experiments is certainly convincing, but the costs make such an approach difficult to use outside the lab. Qualitative researchers argue that there is another way that is equally convincing and more suited to field work.
Another approach to studying human behaviour is interesting in that, rather than relying on the researcher having no theory, it avoids the subjectivity of the scientist by using the theoretical framework of the subjects. With such "ethnomethods", the intention is to explain behaviour not from an outsider's view but from the perspective of the "community of practice". This is particularly relevant when the aim is to model a person participating in a community of language users. The observation is that the model needs to capture the reasoning of the subject. This reasoning can be wrong, but that is the reasoning that matters. A community of bees can be (quantitatively) shown to communicate with each other, but a model of a bee communicating needs to capture the available behaviours, actions and activities of the community of bees. Garfinkel's observation [Garfinkel 1967] (in different words) was that, as a bee, one would have direct access to the significance of communicative acts within that community. If a bee does something that is not recognisable, it was not a communicative act. If a bee fails to recognise a communicative act (that others would generally recognise) then that bee is not a member of the community of practice. That is, communication is defined in terms of a community of practice, and the notion of ``direct access'' to the significance of a communicative act is defined in terms of that.

The same of course applies to humans. As a member of the community of practice one has direct access to the significance of an act, but as a scientist one ought to be objective. Studying human interaction as a scientist, I need to be careful about my theories about how things work. As a mostly successful human communicator, I do not need to justify my understanding of the communicative acts of other humans. The first challenge is to keep the two types of theory separate. My scientific theory is outside, hopefully objective, and independent of my ability to hold a conversation. My folk theory of what is going on in a conversation is critical to doing conversation. It is ``inside'' the process, and as long as it enables me to participate in communication based activities, the objectivity of the theory is immaterial.
The HCI community do emphasise the need for designers to understand users, but the dominant view has a strong tendency to be an ``outsider's view.'' Sharp et al, for example, provide a list of solidly academic cognitive models that are expected to shed light on how people will react to a given design and that can be used to guide the design process. It is argued here that conversational interfaces ought to be designed with an insider's view of human agency. This perhaps explains why amateur developers are so good at scripting agents - there simply is no secret ingredient [Po]. One might have a micro-level scientific understanding of human behaviour based on neurons, one might also have a meta-level scientific understanding of human behaviour based on dialectics and control of the means of production, one might even have a ``meso'' level [Payr 2010] scientific understanding of human behaviour based on, say, production systems [Rosenbloom, Laird & Newell 1993], but what we need to capture is the ``folk'' theory at the meso level.
The proposal that we are pursuing at Sheffield is that, in order to simulate human conversational behaviour, we need to capture a suitable folk understanding of events, and that understanding looks much like the essence of a play or novel. With such a model, we will be able to identify the essential as opposed to the incidental features for language based interactive artifacts.
An examination of the computer interface from a Vygotskian perspective has been done before [Laurel 1993], but Vygotsky's legacy in HCI comes primarily through Leontiev and the idea of mediated action [Wertsch 1997]. Human action is mediated by artifacts that have a highly socialized relevance to us - spoons are used in a particular way and multiplication tables are a conceptual tool that can be used to multiply large numbers. What roles can a computer play in socialized mediated action? The artifacts HCI studies are props in scenes performed by actors with roles, and Action Theory, as it is known, is an acknowledged part of the HCI repertoire [Sharp et al 2007]. This perspective however does not acknowledge the distinction between an inside and outside view, and critics can, quite rightly, question the objectivity of such an approach. Modelling human conversational behaviour on such explicitly ``folk'' understandings is a different matter. The folk reasoning of novels is fabulously about the inner workings of a human mind and definitively subjective in nature. The fact that novels exist at all, however, suggests there is something shared.
Two narrators were asked to describe what happens in the recording. To set the scene and suggest a style of writing, they were given an opening paragraph:
"Peter and Mike have been talking in Peter's office where he has a robot rabbit that talks to you and that you can talk to using picture cards."
They were then asked, independently, to finish the story in around 200 words. These are the resulting stories:
Narrator 1:

It is time to go home so Peter takes his keys from the rabbit. Mike notices this and says "Isn't it supposed to say hello?" Peter is about to say something when the rabbit says: "Hello, are you going out?" Peter replies that he is (using the card and verbally) and the rabbit tells him to have a good time, bye. Mike picks up a card and shows it to the rabbit, but nothing happens. He thinks this make sense as the rabbit has said goodbye but Peter thinks it should work and shows the rabbit another card. Mike sees that he has been showing the cards to the wrong part of the rabbit and gives it another go. Still nothing happens and Mike tries to wake it up with an exaggerated "HELLO!". Peter stops packing his bag and pays attention. Mike tries getting the rabbits attention by waving his hand at it. Still nothing happens. Mike looks enquiringly at Peter as if to ask "what's happening" He says "that's a new one" and goes back to his packing. Mike takes his leave at this point. Peter finishes his packing, and, as he leaves says to the rabbit "You're looking quite broken."

Narrator 2:

Peter is about to do something to wake the rabbit up again and as he is about to speak, it says hello. Peter gestures to Mike that it is now talking as expected. Peter presses the video button to record the interaction. Mike laughs as it talks. It asks Peter if he is going out, to which he responds verbally that he is, showing the rabbit the card meaning yes. Seeing Peter's interaction, Mike tries using the cards to interact with the rabbit himself. It does not respond and Mike suggests that this is because it has said goodbye and finished the conversation. Peter tries to reawaken the rabbit with another card. Mike sees that he had put the card in the wrong place. He tries again with a card, after joking that the face card means "I am drunk". Peter laughs. When the rabbit does not respond, Mike says "hello" loudly up to the camera. Peter says he is not sure why there is no response while Mike tries to get a reaction moving his hand in front of the system. They wait to see if anything happens, Mike looking between the rabbit and Peter. When nothing happens, Peter changes topic and they both start to walk away. Mike leaves. As Peter collects some things together, walking past the rabbit, he looks at it. Before leaving the room he says to the rabbit "you're looking quite broken".
Table 2: Events common to both descriptions.

1. Peter is about to say something and is interrupted by the rabbit
2. the rabbit asks if he is going out, Peter's verbal and card response
3. the rabbit says bye
4. Mike's attempt to use a card and the non-response of the rabbit
5. Mike's explanation (that the rabbit has already said bye)
6. and Peter showing the rabbit another card
7. Mike sees that he has been showing the card to the wrong part of the rabbit and has another go
8. the rabbit does not respond
9. Mike says "Hello" loudly
10. Peter acknowledges it doesn't look right
11. Mike tries again by waving his hand in front of the rabbit
12. no response from the rabbit
13. Mike looks at Peter
14. They give up
15. Mike leaves
16. Peter leaves saying "You're looking quite broken" to the rabbit
There are many differences, and many things were left out entirely. Neither narrator mentioned the rather interesting equipment in the background nor commented on the colour of clothes the participants were wearing. There is no comment on accents or word usage; no comment on grammatical structure or grounding or forward and backward looking function. Whatever it is that the narrators attend to, it is different to the type of thing looked at by those using the popular annotation schemes. It does however seem to be shared, and so the events in Table 2 can be identified as common to both descriptions.
Note that the observation of these events is not only shared by the narrators; the events will also be "foregrounded" for the participants. That is, Peter and Mike will, to a large extent, observe the same things happening and, what is more, each will assume that his conversational partner, to a large extent, observes the same things. The hypothesis is that this shared background information is the context against which the conversation's utterances are produced. This is not to say that folk theory is true theory, but if we want to simulate human reasoning and engineer better dialog systems, then the simulation needs to use the same reasoning. The scientific challenge is to capture it, and to do that in a way that is convincing.
Abell's approach is to represent the world as being in a particular state, with human action moving the world to a new state. The human action is seen as partly intentional (a human will act so as to bring about preferred states of the world, based on beliefs) and partly normative (a human will do, this time, what they did the last time the same situation occurred).
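The sketch below is one way to read that account computationally. It is a toy under our own naming, not Abell's own formalism, but it shows how the normative and intentional modes of action choice might sit side by side.

```python
# An illustrative sketch (our names, not Abell's notation) of the two ways an
# actor is taken to move the world to a new state: normatively, by repeating
# what was done last time in the same situation, or intentionally, by choosing
# the action whose outcome state is preferred, given the actor's beliefs.
def choose_action(state, actions, outcome, prefers, precedent):
    """actions: available acts; outcome(state, act) -> believed resulting state;
    prefers(s1, s2) -> True if state s1 is preferred to s2;
    precedent: {state: act} recording what was done before in that situation."""
    if state in precedent:                 # normative: do what was done last time
        return precedent[state]
    best = actions[0]                      # intentional: pick the preferred outcome
    for act in actions[1:]:
        if prefers(outcome(state, act), outcome(state, best)):
            best = act
    return best

if __name__ == "__main__":
    # Toy usage with hard-coded beliefs about outcomes and preferences.
    outcome = lambda s, a: {"ask": "answered", "wait": "ignored"}[a]
    prefers = lambda s1, s2: s1 == "answered"
    print(choose_action("greeted", ["ask", "wait"], outcome, prefers, precedent={}))
```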
Abell's format for a narrative [Abell 2010] consists of:
A second observation is that, although the notion of states and transitions is, in a formal sense, complete, it seems sequences of states/transitions are also first-order objects and can form singular causes in an actor's reasoning. Mike's attempt to interact with the rabbit is motivated by his observation of the entire preceding interaction. Abell does talk at length about levels of description, but it seems our participants are switching levels as they go. The principle however remains sound, and we need a way to present, formally, the notion of multiple levels of description; one possible representation is sketched below.
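As a hedged illustration of what such a multi-level representation could look like, the sketch below lets a sequence of events be wrapped up as a single higher-level event that can itself be cited as a cause. The class names and the particular grouping are our assumptions, not something present in the narratives themselves.

```python
# A sketch of one way to make sequences of events first-order objects, so a
# whole episode can be cited as the cause of a later event.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Event:
    id: str
    description: str
    agent: str | None = None
    causes: list[Event] = field(default_factory=list)

@dataclass
class Episode(Event):
    """A sequence of events treated as a single, higher-level event."""
    parts: list[Event] = field(default_factory=list)

# Mike's attempt with the card is caused not by any single event but by the
# whole preceding demonstration, modelled here as one Episode.
e2 = Event("E2", "rabbit says: Hello, are you going out?", agent="rabbit")
e4 = Event("E4", "Peter replies that he is", agent="peter", causes=[e2])
e5 = Event("E5", "rabbit tells him to have a good time", agent="rabbit", causes=[e4])
demo = Episode("EP1", "Peter demonstrates the rabbit", parts=[e2, e4, e5])
e6 = Event("E6", "Mike shows the rabbit a card", agent="mike", causes=[demo])
```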
Abell provides a theoretical framework for a model of causality that we can use to account for the action in our video data. These accounts are purely descriptive but the observation is that they can be reused.
Plays and novels exist because they provide plausible accounts of human behaviour. The principle is that plays and theatre work because the characters in them behave in accordance with our expectations. As a human in a conversation, I have no idea of the underlying mechanism my conversational partner is using, but I do have a quite reliable folk psychological model, and it is that model we intend to capture.
The accounts of interest are of course not in the video data, but are produced by the narrator. In effect we would be accessing the narrator's head and the data is there to prompt the annotator to apply his or her knowledge to particular scenarios.
Accounts of the action in the video data as written down by the narrators are of course descriptive in that they are written to `fit' past events. The claim - yet to be verified - is that they are also predictive. If Mike wants to use the system, then it would not be surprising if other people also want to try it. If failure to work causes disappointment in Mike, it is likely to also cause it in others. Having a predictive model of events, we are well on the way to having prescriptive rules that can be used to drive conversational behaviour.
What do these accounts look like? They are folk theory and as such will be in line with Dennett's notion of an ``intentional stance'' [Dennett 1987]. In detail they will fit with the idea that people do what they believe is in their interests - a fact too trivial to state for a human, but machines need to be told. Using Dennett's example, seeing two children tugging at a teddy bear, we folk know they both want it. In the video, Peter wants to show Mike how the system works; Mike believes the rabbit has finished talking. In implementation, a fleshed-out account will look much like the plans, goals and cues in a Belief, Desire and Intention (BDI) agent architecture [Bratman et al] [Rao & Georgeff 1995]. The BDI architecture was explicitly developed as a model of human reasoning and attempts to balance reactive and deliberative behaviour. It was not intended as a dialog manager but has been used that way by several researchers, including Ardissono and Boella [1998], Wallis et al [2001] and Kopp et al [2005]. The problem, as always, is how to get the required data to populate such models, and the proposal here is to have our annotators provide the details for their own narrative. The aim is to have a methodology and tools to help.
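As a rough indication of the target representation, the sketch below shows how such an account might be expressed as BDI-style plans with goals, cues and steps. The beliefs, predicates and plan bodies are invented for illustration; they are not the output of any annotator, nor the actual SERA implementation.

```python
# A minimal, illustrative BDI-style structure: plans pair a goal (desire) with
# a cue over current beliefs and a list of steps (things to say or do).
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Plan:
    goal: str                         # the desire this plan serves
    cue: Callable[[Set[str]], bool]   # triggering condition over current beliefs
    steps: List[str]                  # things to say or do

PLANS = [
    Plan(goal="demonstrate the rabbit",
         cue=lambda b: "mike_watching" in b and "rabbit_awake" in b,
         steps=["show card to rabbit", 'say "watch this"']),
    Plan(goal="account for non-response",
         cue=lambda b: "rabbit_silent" in b,
         steps=['say "that is a new one"']),
]

def deliberate(beliefs, intentions):
    """One reactive/deliberative step: adopt any applicable plan not yet intended."""
    for plan in PLANS:
        if plan.cue(beliefs) and plan not in intentions:
            intentions.append(plan)
    return intentions

if __name__ == "__main__":
    beliefs = {"mike_watching", "rabbit_awake"}
    print([p.goal for p in deliberate(beliefs, [])])
```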
Cognitive Task Analysis, and in particular the Critical Decision Method, have been designed explicitly to elicit folk explanations from experts.
Table 3 provides a preliminary draft for the set of instructions to be given to our annotators. The general approach has been to have the ``annotator'' produce a narrative description of the video in order to capture the essential, while leaving out the detail. The narrative is then formalized as described by Abell to provide events and the links between them. The next step is to have the annotator flesh out those links with questions like those by O'Hare et al (see the Appendix) to elicit the unstated and obvious. In particular, what are the goals of the characters in the narrative, what information does each character have that impinges on the action, and what choices were made.
Finally, having had the annotator work through their story in detail, he or she can be asked to expand on events in the video by exploring ``what if'' scenarios, producing script that agents might have said. Following the CDM approach, the conditions under which an agent might say one thing rather than another can be explored and documented, providing a future conversational agent with our annotator's folk model of not only what to say but also when to say it. This is of course speculative at this stage and will be the subject of future work.
```xml
<?xml version="1.0" ?>
<analysis>
  It is time to go home so Peter
  <event id="E0" agent="peter"> takes his keys from the rabbit. </event>
  Mike notices this and
  <event id="E1" agent="mike" cause="E0"> says "Isn't it supposed to say hello?" </event>
  Peter is about to
  <event id="E3" agent="peter" cause="E1"> say something </event>
  when the rabbit
  <event id="E2" agent="rabbit" cause="E0"> says: "Hello, are you going out?" </event>
  Peter
  <event id="E4" instrument="card and verbally" cause="E2"> replies that he is (using the card and verbally) </event>
  and the rabbit
  <event id="E5" agent="rabbit" cause="E4" recipient="peter"> tells him to have a good time, bye. </event>
  Mike
  <event id="E6" agent="mike" recipient="rabbit"> picks up a card and shows it to the rabbit, but nothing happens. </event>
  He
  <event id="E8" agent="mike" cause="E6"> thinks this make sense as the rabbit has said goodbye </event>
  but Peter thinks it should work and
  <event id="E9" agent="peter" cause="E6"> shows the rabbit another card. </event>
</analysis>
```
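This markup is already machine readable. The sketch below is a minimal loader, written for this paper rather than taken from the SERA tooling, that turns the annotation into an explicit event graph (agent, stated cause, text) keyed by event id; the file name is hypothetical.

```python
# A minimal loader that turns the annotation above into an event graph.
import xml.etree.ElementTree as ET

def load_events(xml_text):
    """Return {event id: {agent, cause, text}} from an <analysis> annotation."""
    root = ET.fromstring(xml_text)
    events = {}
    for ev in root.iter("event"):
        events[ev.get("id")] = {
            "agent": ev.get("agent"),
            "cause": ev.get("cause"),   # None when no cause is stated
            "text": " ".join((ev.text or "").split()),
        }
    return events

if __name__ == "__main__":
    # 'analysis.xml' is a hypothetical file holding the annotation shown above.
    with open("analysis.xml") as f:
        for eid, e in load_events(f.read()).items():
            print(f'{eid}: {e["agent"]} "{e["text"]}" (cause: {e["cause"]})')
```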
This figure presents the first half of that data graphically, with the agents on the vertical axis and time on the horizontal.
The engineering challenge is to use the folk theory of the annotator to produce script. The next step is to explicitly state the cause of events that do not have a stated cause, and to flesh out the causal link for those relations in which $B$ does not necessarily follow from $A$. Explaining $E0$ and $E6$, for instance, requires the introduction of motivations on behalf of the characters. This figure provides these causes as boxed labels at the top:
This is perhaps a controversial move, but recall that first, as members of the community of practice, we have a strong shared understanding of such things -- it is not perfect, but not bad. If we did not understand the speaker's motivations, the text would not make sense. As a member of the community of practice, we may be wrong about another's intentions but as a participant in an interaction, any error can be safely ignored or it will get corrected. Second, and more objectively, we are engineering here and are only interested in a plausible explanation.
The dashed arrow and the open state represent an expectation, expressed in the original narrative as ``nothing happens.'' Formally in CA terms, this expectation is the second pair-part of an adjacency pair that did not occur. Once again, as members of the community of practice, we have a good understanding of the notion of the "normal response" to a first pair-part.
Given a state-based description of the action, how might an annotator's workbench elicit data suitable for our implementation? In broad terms the aim is to get actions (things to actually say), events (things heard that are significant to plan choice) and plans (tactics). Given the state description, a workbench might automatically ask questions of the narrator about phenomena in their diagram. For instance, $E3$ does not lead to a new event, so what is the relationship between it and $E4$? Is it part of the same plan being used by Peter? If not, did Peter's initial plan fail? In this case, no, he re-planned in the light of a new event.
Having captured the causal links in the narrative, the annotator can now go on to explore the counter-factuals, and this process can once again be partly automated using the thematic roles and CDM-style questions. For instance, event $E9$ is ``[peter] shows the rabbit another card.'' A variant on the questions in the Appendix would be ``Is there something else that Peter might have done to achieve `demonstrate the rabbit'?''
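Both kinds of prompt, the structural questions about phenomena in the diagram and the counterfactual probes, could plausibly be generated from the event graph built earlier. The sketch below is our illustration of that idea; the question templates are stand-ins for the probes in the Appendix, not a finished workbench.

```python
# A sketch of generating annotator prompts from the event graph produced by
# load_events() above. Templates are illustrative stand-ins for CDM probes.
def generate_prompts(events):
    prompts = []
    cited_as_cause = {e["cause"] for e in events.values() if e["cause"]}
    for eid, e in events.items():
        if e["cause"] is None:
            # No stated cause: ask for the character's motivation.
            prompts.append(f"{eid} has no stated cause: what was {e['agent']}'s "
                           f'goal in "{e["text"]}"?')
        if eid not in cited_as_cause:
            # Leads to no further event: did the plan succeed, fail, or change?
            prompts.append(f"{eid} leads to no new event: did {e['agent']}'s "
                           f"plan succeed, fail, or get revised?")
        # Counterfactual probe for every event, in the style of the Appendix.
        prompts.append(f'Is there something else {e["agent"]} might have done '
                       f'instead of "{e["text"]}"?')
    return prompts
```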
The aim is a methodology built around the notion of narratives as causal links between events. The hope is that such a methodology will take us one step closer to being able to engineer conversational agents by simply ``cranking the handle'' without need for insight as insight has, so far, failed to produce the goods.
Abell, P. (2010). 'A case for cases: Comparative narratives in sociological explanation'.
de Angeli, A. (2005). 'Stupid computer! Abuse and social identity'. In de Angeli, A., Brahnam, S., and Wallis, P., editors, Abuse: the darker side of Human-Computer Interaction (INTERACT '05), Rome.
de Angeli, A., Johnson, G.I., and Coventry, L. (2001). 'The unfriendly user: exploring social reactions to chatterbots'. In Helander, K. and Tham, editors, Proceedings of The International Conference on Affective Human Factors Design, London. Asean Academic Press.
Dennett, D.C. (1987). The Intentional Stance. The MIT Press.
Garfinkel, H. (1967). Studies in Ethnomethodology. Prentice-Hall.
Kopp, S., Gesellensetter, L., Kramer, N., and Wachsmuth, I. (2005). 'A Conversational Agent as Museum Guide - Design and Evaluation of a Real-World Application'. 5th International Working Conference on Intelligent Virtual Agents. http://iva05.unipi.gr/index.html
Laurel, B. (1993). Computers as Theatre. Addison-Wesley Professional.
O'Hare, D., Wiggins, M., Williams, A., and Wong, W. (1998). 'Cognitive task analyses for decision centred design and training'. Ergonomics, 41(11):1698--1718.
Payr, S. (2010). Personal communication.
'Peter and Mike have trouble with a rabbit'. http://staffwww.dcs.shef.ac.uk/people/P.Wallis/PMRvideo.mov
Rao, A., and Georgeff, M. (1995). 'BDI Agents: from Theory to Practice'. AAII Technical Report 56. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.7970
Sacks, H. (1992). Lectures on Conversation (edited by G. Jefferson). Blackwell, Oxford.
Searle, J. R. (1969). Speech Acts: an essay in the philosophy of language. CUP.
Violet. The Nabaztag. http://www.nabaztag.com/en/index.html
Wertsch, J. V. (1997). Mind as Action. Oxford University Press.
| Probe | Question |
|---|---|
| Goal specification | What were your specific goals at the various decision points? |
| Cue identification | What features were you looking at when you formulated your decision? |
| Expectancy | Were you expecting to make this type of decision during the course of the event? Describe how this affected your decision-making process. |
| Conceptual model | Are there any situations in which your decision would have turned out differently? Describe the nature of these situations and the characteristics that would have changed the outcome of your decision. |
| Influence of uncertainty | At any stage, were you uncertain about either the reliability or the relevance of the information that you had available? At any stage, were you uncertain about the appropriateness of the decision? |
| Information integration | What was the most important piece of information that you used to formulate the decision? |
| Situation awareness | What information did you have available to you at the time of the decision? What information did you have available to you when formulating the decision? |
| Situation assessment | Did you use all the information available to you when formulating the decision? Was there any additional information that you might have used to assist in the formulation of the decision? |
| Options | Were there any other alternatives available to you other than the decision that you made? Why were these alternatives considered inappropriate? |
| Decision blocking - stress | Was there any stage during the decision-making process in which you found it difficult to process and integrate the information available? Describe precisely the nature of this situation. |
| Basis of choice | Do you think that you could develop a rule, based on your experience, which could assist another person to make the same decision successfully? Why/why not? |
| Analogy/generalization | Were you, at any time, reminded of previous experiences in which a similar decision was made? Were you, at any time, reminded of previous experiences in which a different decision was made? |