<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Intention Recognition - Techniques</title>
<link rel="stylesheet" type="text/css" href="main.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script type="text/javascript" src="main.js"></script>
</head>
<body>
<div id="header">
<h1>Intention Recognition</h1> <h2>Techniques</h2>
</div>
<div id="navigator">
<div id="links">
<a href="#introduction">Introduction</a>
<a href="#probabilistic">Probabilistic</a>
<a href="#casebased">Case Based</a>
<a href="#logic">Logic</a>
</div>
</div>
<div id="sub-navigator">
<div id="sub-links">
</div>
</div>
<div id="content">
<!-- INTRODUCTION PAGE -->
<div class="page" id="page-introduction">
<div class="box" title="Introduction">
<p>Intention recognition is the idea of analyzing the actions of an agent to predict what that agent's goal is. It can be described as the opposite of planning, where a goal is known and actions are planned to complete this goal. In intention recognition, the actions are known, and we must try to guess the goal of the agent.<sup id="i1"></sup></p>
</div>
<div class="box" title="Early Work">
<p>The topic was first researched in the 1980s, when it was originally intended to help with automatic response generation, for example in computer help messages.<sup id="i2"></sup> The field today is much more widespread: research is being done on ambient intelligence, sophisticated computer games and military applications such as terrorism detection.</p>
</div>
<div class="box" title="Overview">
<p>There are usually three components to consider when creating an intention recognition system: firstly, a set of end goals, or intentions, that can be reached; secondly, some type of knowledge about how these goals can be reached; and lastly, the sequence of actions performed by the agent. Combining these three ideas, it is possible to recognise the goal of an agent.<sup id="i1"></sup></p>
<p>There are three main formalisms of intention recognition that we will explore further in this topic. You can read more about the logic-based, case-based and probabilistic approaches on their respective pages.</p>
</div>
<div class="box" title="Example">
<img src="airport.png" width="500px" height="300px" align="right">
<p align="left">In this website we demonstrate the different techniques with a real world example.</p>
<p>Take a man in his 40s, acting nervously in an international airport. He has not checked in and is waiting in a crowded area with a single bag. He is sitting in the lower seating area, then is seen at the blue X rushing to the bathroom and leaving his bag at the position of the red X.</p>
<p>In this website we will explore different ways that we can design a system to recognise and appropriately respond to behaviours such as this.</p>
</div>
<div class="box" title="Comparison">
<p>Although working towards the same goal, each of the three formalisms explained in this document uses different methods and implementations to recognise an agent's intention. It is easiest to compare the logic-based and probabilistic approaches, as the underlying concepts are reasonably similar: both have a set of programmed rules that the system will follow. The main difference is that the rules in logic are concrete, e.g. the man in the airport walking towards the telephone implies that his goal is to make a phone call. In a probabilistic approach, the fact that he is walking towards the telephone could mean that he is going to make a phone call, but there is also a chance that he is going to the bathroom, or perhaps planning a terrorist attack. Each of these situations has a probability associated with it, so the intention recognised by a probabilistic system is a "best guess". The logic system can sometimes fail when recognising human behaviour, since we don't always follow a concrete set of rules, which could confuse the system; the probabilistic approach, on the other hand, knows that there is a chance that the human agent is trying to do something else, and takes this into account in its probabilities.</p>
<p>A case-based implementation would look at the situation slightly differently. It would search a database for accounts of people behaving similarly, and look at what they were trying to achieve. Using this information it can recognise the intent of an agent based on what it has seen in the past. The system could know that the last time it saw a human rushing in that direction, they were rushing to make an important phone call, and so predict that this is what is happening now. The main problem is that where an event is particularly rare, a certain type of behaviour might not have been witnessed before. It is possible that the system has never seen a man behaving in this way, so it would have no idea what his intentions are. The same could potentially be said for the other formalisms: if a specific situation has not been programmed into them, they could also fail to recognise that situation.</p>
</div>
<div class="box" title="References">
<ol>
<li>Sadri, F., ‘Intention Recognition in Agents for Ambient Intelligence: Logic-Based Approaches’, Bosse, T. (ed.) Agents and Ambient Intelligence, Ambient Intelligence and Smart Environments, vol. 12, pp. 197–236. IOS Press. (2012)</li>
<li>Wilensky, R., Planning and Understanding. Addison-Wesley, Reading, MA. (1983)</li>
</ol>
</div>
</div>
<!-- PROBABILISTIC PAGE -->
<div class="page" id="page-probabilistic">
<div class="box" title="Why Probabilistic Based Techniques Are Used">
<p>The motivation for a probabilistic approach to intention recognition becomes apparent in the appropriate applications: particularly in highly uncertain situations, such as those involving noisy sensors (Charniak & Goldman 1993, p. 64)<sup id="p3"></sup>, or where a fast and immediate response is necessary (Kasteren & Englebienne & Kröse 2010, Chapter 8)<sup id="p4"></sup>, other models are less viable due to their lack of nuance and efficiency.</p>
<p>As the name implies, these uncertain processes are dealt with using probabilities. But what exactly constitutes an uncertain situation? Many factors have been posited. For instance, an agent (the subject whose intent we are trying to guess) who is sentient cannot have a fully accurate perception of their surroundings. A rational, thinking agent therefore cannot generally carry out their intentions in a simple if/then fashion, as they have to deal with their own intrinsic uncertainties arising from incomplete knowledge of the environment (Tahboub 2005, p. 4)<sup id="p1"></sup>.</p>
<p>Also, basic knowledge of rational agents suggests that individual responses to environments will vary drastically between agents, which adds uncertainty to recognising the possible intentions. This creates intention complexity (Heinze 2004, p. 25)<sup id="p5"></sup>. Note that this factor is more likely to contribute significant uncertainty if the agent pool (the theoretical number of agents to be observed by the computerised recognition) is large, and therefore a targeted approach - whereby specific agents' characteristics are known and their possible reactions can be more accurately defined against the general population - becomes less practical. Furthermore, in a large agent pool, either more time and resources are needed to map the agents' characteristics<sup id="p5"></sup>, if known, to more personalised intentions, or an agent's specific characteristics can simply be ignored, adding uncertainty but saving time and efficiency in a model implementation. Although these factors and decisions may seem situation-specific and hard to envision, they are vital in choosing how to curate models for implementations, which vary all the way from airport security to smart homes and speech recognition.</p>
</div>
<div class="box" title="How And What Are These Techniques Implemented">
<p>One of the most widely used models for interpreting probabilistic situations is the Hidden Markov Model ('hidden' refers to the unknown states, which for intention recognition are the intentions). This model is a specialised Bayesian network which is dynamic, meaning time is a factor in the model and directly affects the current state. Such a model has three basic building blocks (Rabiner & Juang, January 1986, pp. 6-7)<sup id="p2"></sup>.</p>
<ul>
<li>A finite number of states, say N. Depending on the situation, states can be hard to come up with and their number must be limited, given the effectively infinite complexity of most potential applications<sup id="p2"></sup>. Finding the balance between complexity and efficiency can involve a great deal of trial and error<sup id="p2"></sup>. These states can be represented diagrammatically, or, for a flexible computing implementation, as a tree of linked lists with variable numbers of pointers carrying probability weightings (this is especially useful for initial experimentation, where heuristic, trial-and-error thinking may alter the number of nodes needed). This can be implemented in most imperative programming languages.</li>
<li>There is a clock time, say t. At each new clock time, a new state is entered depending on the previous state's probability distribution. The states therefore change as the clock time increases, meaning time is treated as a discrete quantity - indeed everything in this model is. A sequence is used to keep track of the states and observations (described next) at these times. In a computerised implementation, an iterative solution can be employed where, as an example, the program iterates over objects with a time field (t1, t2 .. tX for each object one to X, time represented as an incrementing int), a state field (s1, s2 .. sX) and an observation field (o1, o2 .. oX). For our applications of the HMM, the current state (the state field) will be 'hidden' and thus most likely null; while running an algorithm to find a probable ending state, or intention, these fields change from null to a probability-distribution-based object.</li>
<li>Finally, observation symbols, say M, are produced based on the current state's probability distribution over observation emissions. For intention recognition purposes, however, observations are found first and then used to approximate the state that most likely led to them. Therefore, in the intention recognition use of HMMs, each new distinct observation triggers a new time t (which increments in turn), so in an iterator for generating the sequence, the probable implementation of hasNext() would be based on whether something new has been observed about the agent. Observations would probably be implemented on the same tree-of-linked-lists structure as the states, as described above.</li>
</ul>
<p>There is one additional component intrinsic to the model: the probabilities. In this model they are represented as arcs. From state to state, an arc describes the probability that, at the next time t, the state changes to the state the arrowhead points at (an arc can point back to its own state); from state to observation, an arc describes the probability that, at the next time t, the state emits the observation the arrowhead points at. By basic probability theory, all outgoing state-to-state arcs from a state must sum to probability 1, and similarly all state-to-observation arcs from a state must sum to probability 1. This whole system, before observations are made to start the time iteration, can be statically implemented on a computer system in a tree-like structure as described previously.</p>
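<p>To make the building blocks concrete, here is a minimal sketch of how the states, arcs and initial distribution could be laid out, written in Python for brevity. All state names, observation names and probabilities below are illustrative assumptions for the airport example, not values from a real system:</p>
<pre>
# A minimal sketch of the HMM building blocks described above.
# All names and numbers are illustrative assumptions only.

states  = ["make_phone_call", "go_to_bathroom", "plant_bomb"]  # the N hidden states
symbols = ["walk_to_phones", "rush", "leave_bag"]              # the M observation symbols

# State-to-state arcs: each row must sum to 1.
transition = {
    "make_phone_call": {"make_phone_call": 0.7, "go_to_bathroom": 0.2, "plant_bomb": 0.1},
    "go_to_bathroom":  {"make_phone_call": 0.3, "go_to_bathroom": 0.6, "plant_bomb": 0.1},
    "plant_bomb":      {"make_phone_call": 0.1, "go_to_bathroom": 0.2, "plant_bomb": 0.7},
}

# State-to-observation arcs: each row must also sum to 1.
emission = {
    "make_phone_call": {"walk_to_phones": 0.8, "rush": 0.15, "leave_bag": 0.05},
    "go_to_bathroom":  {"walk_to_phones": 0.1, "rush": 0.7,  "leave_bag": 0.2},
    "plant_bomb":      {"walk_to_phones": 0.1, "rush": 0.4,  "leave_bag": 0.5},
}

# Distribution over states at the first time step.
initial = {"make_phone_call": 0.5, "go_to_bathroom": 0.4, "plant_bomb": 0.1}
</pre>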
<p>Hopefully, by showing some potential core underpinnings of the implementation, it is clear not just what the primary concepts of HMMs are, but also how it is a relatively easy task to abstract the process to a computerised system with a degree of flexibility. Still, a model is not useful unless it is used to make predictions. After exploring the algorithms for this, I will give an example to clarify the whole process further.</p>
</div>
<div class="box" title="Algorithms To Find Intentions">
<p>There are many algorithms commonly used to find various pieces of information about the changing sequence of states N over time t, given a sequence of observations M. The main ones are the forward algorithm (used for a process called filtering), the forward-backward algorithm (the forward pass combined with a backward pass, used for a process called smoothing) and the Viterbi algorithm (used to find the most likely sequence of states). Filtering uses the forward algorithm to find a probability distribution over the state the sequence is currently in. Smoothing combines the forward and backward passes to find a probability distribution over the state the sequence was in at an earlier time. The subtle difference between filtering and smoothing is that smoothing looks at the state at a past point in time t, not the most recent one - essentially meaning that information gained later in the sequence of observations can be used to give a more precise prediction of the states preceding those extra observations. This retrospective linking of observations is why the word 'backward' is used: information gained from the future is used to trace back the steps that led there, increasing the accuracy of a prediction compared to one made without the extra observations. Finally, the Viterbi algorithm predicts not just a single state at a time t, but the whole sequence of states over a range of times.</p>
<p>In the context of intention recognition, the most appropriate algorithms to employ are generally the forward and forward-backward algorithms. For most contexts, especially when constantly monitoring and updating the intentions of an agent, the sequence that led to a current state is insignificant compared to the probability of the state itself: we just want to know which intention is most likely and respond to it if required, and the transitions are just a way of getting to these states. For most uses on intelligent agents, every new observation would, after being added to the sequence, require a re-computation of the forward algorithm at this new point in time and, if necessary, a forward-backward pass to re-estimate potential intentions at important earlier points with the extra information gained.</p>
<p>The forward algorithm is defined as follows (using the notation of the Wikipedia article)<sup id="p6"></sup>:</p>
<p>We are trying to find the probability <span>p(x<sub>t</sub>, y<sub>1:t</sub>)</span>: the joint probability of the state <span>x<sub>t</sub></span> at time t and all observations y up to time t (1 to t). To calculate the entire distribution, we do this calculation for every state in X; normalising the result then gives the filtered distribution <span>p(x<sub>t</sub>|y<sub>1:t</sub>)</span>, which sums to 1. To compute the probability efficiently, we make use of the conditional-independence rules of the HMM, which are fairly simple because each state depends only on the directly preceding state. Using a combination of the chain rule and the model conditions, the expression simplifies to:</p>
<span style="font-size:16px">α<sub>t</sub>(x<sub>t</sub>) = p(y<sub>t</sub>|x<sub>t</sub>)Σ<sub>x<sub>t-1</sub></sub>p(x<sub>t</sub>|x<sub>t-1</sub>)α<sub>t-1</sub>(x<sub>t-1</sub>)</span>
<p>Where <span>α<sub>t</sub>(x<sub>t</sub>)</span> is <span>p(x<sub>t</sub>, y<sub>1:t</sub>)</span>. The probability <span>p(y<sub>t</sub>|x<sub>t</sub>)</span> is simply the state-to-observation probability arc described earlier, the probability <span>p(x<sub>t</sub>|x<sub>t-1</sub>)</span> is simply the transition probability, and we recurse on <br><span>α<sub>t-1</sub>(x<sub>t-1</sub>)</span>.</p>
<p>To compute this algorithm efficiently without unnecessary recalculation, it makes sense to work iteratively along the sequence of times and observations described under the implementation techniques heading above. Then, when we get a new observation, we don't have to start the calculation from scratch, as we reuse the previous results, starting from a first, start-state probability of 1 that eventually feeds into the others. Hence <span>α<sub>t</sub>(x<sub>t</sub>)</span> uses the previous iteration <span>α<sub>t-1</sub>(x<sub>t-1</sub>)</span> and the transition and emission (state-to-observation) arcs <span>p(x<sub>t</sub>|x<sub>t-1</sub>)</span> and <span>p(y<sub>t</sub>|x<sub>t</sub>)</span> respectively to get the next iteration of the algorithm.</p>
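<p>As a sketch of that iteration, the recursion above can be written directly over the dictionaries from the earlier sketch (again an illustration under the same assumed names, not a production implementation):</p>
<pre>
# A sketch of the forward algorithm over the transition/emission
# dictionaries defined earlier. alpha[x] holds p(x_t, y_1:t);
# normalising it gives the filtered distribution p(x_t | y_1:t).

def forward_step(alpha_prev, obs, transition, emission):
    """Fold one new observation into the previous alphas."""
    alpha = {}
    for x in transition:
        total = sum(alpha_prev[x_prev] * transition[x_prev][x]
                    for x_prev in transition)
        alpha[x] = emission[x][obs] * total
    return alpha

def forward(obs_sequence, initial, transition, emission):
    # The first time step uses the initial distribution, not transitions.
    alpha = {x: initial[x] * emission[x][obs_sequence[0]] for x in initial}
    for obs in obs_sequence[1:]:
        alpha = forward_step(alpha, obs, transition, emission)
    return alpha

# The man rushes off, then his bag is noticed left behind.
alpha = forward(["rush", "leave_bag"], initial, transition, emission)
total = sum(alpha.values())
filtered = {x: a / total for x, a in alpha.items()}  # sums to 1
</pre>
<p>When a new observation arrives, only one further call to forward_step on the stored alphas is needed, which is exactly the reuse of previous results described above.</p>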
<p>The backward algorithm works in a similar way but essentially in reverse, and merging both calculations gives a probability distribution at a past time t. It might be sensible to use the forward-backward algorithm if, at some point, the highest-probability states were close together and later observations could clear up which was more likely - especially if one of these close second- or third-highest-probability intentions required immediate action from the program and was considered important.</p>
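<p>For completeness, the backward pass can be written in the same notation, with <span>β<sub>t</sub>(x<sub>t</sub>) = p(y<sub>t+1:T</sub>|x<sub>t</sub>)</span> computed by the recursion:</p>
<span style="font-size:16px">β<sub>t</sub>(x<sub>t</sub>) = Σ<sub>x<sub>t+1</sub></sub>p(x<sub>t+1</sub>|x<sub>t</sub>)p(y<sub>t+1</sub>|x<sub>t+1</sub>)β<sub>t+1</sub>(x<sub>t+1</sub>)</span>
<p>The smoothed distribution at a past time t is then proportional to the product of the two passes: <span>p(x<sub>t</sub>|y<sub>1:T</sub>) ∝ α<sub>t</sub>(x<sub>t</sub>)β<sub>t</sub>(x<sub>t</sub>)</span>.</p>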
</div>
<div class="box" title="How To Estimate The Probability Arcs">
<p>Creating the states for a particular HMM can require a great deal of trial and error, as stated earlier (Rabiner & Juang, January 1986, pp. 6-7)<sup id="p2"></sup>, but forming estimates for the probability arcs is also hard. Helpfully, however, there are parameter-estimation algorithms to help estimate the state-to-state transition probabilities. One widely used algorithm is Baum-Welch, which iteratively updates the probabilities to increase the likelihood of the observations, but is by no means guaranteed to find an optimal solution (Tu, p. 1)<sup id="p7"></sup>.</p>
</div>
<div class="box" title="Response To Intentions">
<p>Once the likelihood of an agent having an intention is known, using the forward or forward-backward algorithms described, the probability distribution can be further weighted by how necessary it is to deal with each intention. For instance, in airport security: if a man leaves his bag (observation) and then runs to the toilet (observation), and the resulting probabilities that the man's intention is to detonate a bomb in his bag or that he simply forgot his bag are equal, it makes sense to prepare for the worst and respond to the bomb threat, as it is far more serious. The intentions should therefore be weighted, in an airport-safety context, by risk on a multiplicative scale; the highest number then indicates which action is most required.</p>
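<p>Continuing the Python sketch, this weighting is a single step on top of the filtered distribution computed by the forward algorithm above; the severity numbers are purely illustrative assumptions:</p>
<pre>
# A sketch of risk-weighting the filtered distribution. The severity
# values are illustrative assumptions, not calibrated risk figures.

severity = {"make_phone_call": 1, "go_to_bathroom": 1, "plant_bomb": 1000}

risk = {x: p * severity[x] for x, p in filtered.items()}
respond_to = max(risk, key=risk.get)  # act on the highest-risk intention
</pre>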
</div>
<div class="box" title="Conclusion">
<p>To summarise: probabilistic approaches to intention recognition are a viable and computationally achievable way to predict and react to an agent. By establishing state diagrams and their observation probabilities, combined with algorithms to calculate probable intentions based on sensor data, it is very possible to respond to various situational environments with a high degree of information, presented as probabilities.</p>
</div>
<div class="box" title="References">
<ol>
<li>Karim A. Tahboub. 2005. Intelligent Human–Machine Interaction Based on Dynamic Bayesian Networks Probabilistic Intention Recognition. Journal of Intelligent and Robotic Systems. Palestine Polytechnic University.</li>
<li>L. R. Rabiner & B. H. Juang. 1986. An Introduction to Hidden Markov Models. IEEE ASSP Magazine. Stanford.</li>
<li>E. Charniak and R.P. Goldman. 1993. A Bayesian model of plan recognition. Artificial Intelligence.</li>
<li>T.L.M. van Kasteren, G. Englebienne, and B.J.A. Kröse. Human Activity Recognition from Wireless Sensor Network Data: Benchmark and Software. Intelligent Systems Lab Amsterdam, Science Park 107, 1098 XG, Amsterdam, The Netherlands.</li>
<li>Clint Heinze. 2004. Modelling Intention Recognition for Intelligent Agent Systems. DSTO Systems Sciences Laboratory, Edinburgh, South Australia, Australia.</li>
<li><a href=" http://en.wikipedia.org/wiki/Forward_algorithm">http://en.wikipedia.org/wiki/Forward_algorithm</a></li>
<li>Stephen Tu. Unknown date. Derivation of Baum-Welch Algorithm for Hidden Markov Models. University of California, Berkeley.</li>
</ol>
</div>
</div>
<!-- CASE BASED PAGE -->
<div class="page" id="page-casebased">
<div class="box" title="Introduction">
<p>Case-based reasoning (CBR) originates from research in cognitive science in the late 1970s and is mainly based on work and ideas from the American professors Roger Schank and Janet Kolodner<sup id="c1"></sup>. It is one of the three big formalisms in the field of intention recognition. The simple idea behind CBR is that similar problems require similar solutions, and that also describes the core of its implementation: to look at how previous problems have been handled in order to solve new, similar ones. As it is still quite a young technique, it has yet to find more areas of usage; however, it is quite frequently used in healthcare, law and data mining, and it has great potential in smart homes as they become more common and developed, which will be covered later on in this paper. There are also many different approaches to CBR, but here I will focus mainly on the pure case-based one.</p>
</div>
<div class="box" title="What Is A Case?">
<p>In its simplest form, a case can be thought of as a box containing two folders. The first folder holds a problem of some sort, e.g. "the patient has a fever, a sore throat and is coughing; what would an appropriate treatment be?", "the suspect has committed assault; what would a reasonable punishment be?" or "a meal consisting of pasta, cream and bacon". The second folder contains a solution to the first folder's problem, which in the examples above could be "antibiotics", "6 months of jail time" and "carbonara". Together, the problem and its solution, assuming that an answer has been generated or a solution has been applied, form a case, which is then saved and stored for later use. Once formed, a case can be seen as a piece of knowledge worth reusing, or at least worth looking at when new, similar problems or tasks arrive.</p>
<p>Cases can also vary widely in size - a case can hold an evolving situation, as for a medical patient undergoing long-term treatment, or just a short fragment like the ruling of a judge at a specific moment, and anything in between the two extremes. (pp. 9-10)<sup id="c2"></sup></p>
<p>When talking about cases, it is also important to take note of what can be seen as a memorable case and what can be seen as a normal case: how they differ, and how or whether they should be treated differently from each other. This is still up for debate, as the question is quite subjective, so I will give my personal take on it.</p>
<p>Let's say that we wanted to know how to cook carbonara, a task that we have done many times before. A normal case would be one where standard ingredients are used and no complications occur during the cook. A more memorable case could be one where we added an extra ingredient, changed the brand of e.g. the pasta, or left the bacon on the stove for too long so that it burnt, making it a different meal with a different taste and/or nutrition. Storing every cook where everything went as normal as a separate case would be a waste of memory, as they would not differ and each case after the first would contribute nothing new. Storing it once would be interesting though, as that should be seen as a separate and therefore memorable case. As for the memorable cases, I believe that they should all be stored and considered, as each of them differs. It might, however, not be interesting to remember both the time when 100 grams of bacon was used and the time when 105 grams was, as they differ too little to be of real interest. When we are later going to cook a new batch of carbonara, we can then look at our previous cases, sort out the bad ones (which are still interesting to remember if we want to work out the process of making a perfect carbonara, as they tell us what NOT to do) and pick accordingly.</p>
<p>In summary, a case should be stored whenever its solution differs from those of the previous cases, as sketched below.</p>
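<p>As a minimal sketch, the "two folders" of a case could be represented as follows; the field names and example values are illustrative assumptions in line with the guideline above:</p>
<pre>
# A minimal sketch of a case as the "two folders" described above.
# Field names and the example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Case:
    problem: dict    # e.g. {"dish": "carbonara", "bacon_grams": 100}
    solution: str    # e.g. "fry the bacon for 5 minutes, then ..."
    outcome: str     # e.g. "success" or "burnt" - bad cases are kept too

def worth_storing(new_case, library):
    """Store a case only if its solution differs from every stored one."""
    return all(new_case.solution != old.solution for old in library)
</pre>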
</div>
<div class="box" title="The Case Library">
<p>Even though a single case is great, it cannot do much on its own, which is why we need a case library for CBR to work properly. Knowing just one sentence for a crime would not be very informative, since it could have been too harsh, too mild, or surrounded by special circumstances. Knowing a few thousand sentences for similar crimes, however, would be very interesting, as this would most likely cover a lot of different circumstances, motives and profiles, giving us a wider range to compare against and, hopefully, a fair punishment when a new sentence is to be determined.</p>
<p>The library should be filled with relevant cases (how to make carbonara would not be of interest in a trial) and whenever a new problem receives a solution, that should be added as a case (following the guidelines for what can be seen as interesting) to the library in order to further widen its range.</p>
<p>We also need to index our cases properly, to make sure that the reasoner can, in a reasonable amount of time, look through and select the cases deemed relevant to the problem it is currently addressing. (pp. 141-142)<sup id="c2"></sup> For example, if a court is looking to sentence a person for assault and we have a library of previous assault trials, it might only be interesting to look at assaults that are similar in nature and force, not at every single assault previously recorded. To be able to do this, we need to index the different assault trials in some way.</p>
</div>
<div class="box" title="The Process">
<p>Most of the process behind CBR has already been covered earlier in this paper, but I will give a brief run-through of the different steps involved. At its highest and most general level, the cycle of CBR can be split into four steps (a minimal sketch of the cycle follows the list):<sup id="c6"></sup></p>
<ul>
<li>Retrieve the most similar cases</li>
<li>Reuse the information in the old cases to solve a new problem</li>
<li>Revise the proposed solution</li>
<li>Retain the parts of the new case that can be deemed to be interesting</li>
</ul>
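<p>Using the Case objects sketched earlier, the four steps might look like this in outline; the similarity measure is a deliberately naive illustration, not a real CBR index:</p>
<pre>
# An outline of the four-step CBR cycle over the Case objects above.
# The similarity function is a naive illustration only.

def similarity(problem_a, problem_b):
    """Count shared problem features - a very crude index."""
    return len(set(problem_a.items()) & set(problem_b.items()))

def cbr_cycle(new_problem, library, k=3):
    # 1. Retrieve the k most similar cases.
    retrieved = sorted(library,
                       key=lambda c: similarity(c.problem, new_problem),
                       reverse=True)[:k]
    # 2. Reuse: start from the best-matching case's solution.
    proposed = retrieved[0].solution
    # 3. Revise: in practice an expert or rule set adapts the solution here.
    revised = proposed
    # 4. Retain: store the new case only if it adds something new.
    new_case = Case(problem=new_problem, solution=revised, outcome="pending")
    if worth_storing(new_case, library):
        library.append(new_case)
    return revised
</pre>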
</div>
<div class="box" title="Real World Application">
<p>Every example up to this point has been a made-up illustration of how CBR could be used in real-world scenarios, and might not necessarily show how CBR is actually used. They should be taken for what they are: examples generated with some background knowledge. There are, however, real uses of CBR in the areas mentioned above, and one of the most important is definitely medicine and healthcare.</p>
<p>The knowledge of experts in the field of medicine normally consists of two parts - textbook knowledge and experience, where the latter can be thought of as a collection of cases.<sup id="c3"></sup> Whenever a doctor makes a decision about the treatment of a patient, they are more than likely to take both parts into consideration, to give as good a review of the current problem as possible.</p>
<p>If a medical expert has already done the reasoning for a difficult case, it therefore makes sense to store that information, in order to be able to reuse it later if a similar case comes up, as this saves both time and energy.</p>
<p>There are many systems in healthcare today that apply CBR (CASEY, MEDIC and BOLERO among many others), but I have chosen to focus on a system named FLORENCE, which is probably the most general one. FLORENCE deals with healthcare planning for nursing and fulfils three basic tasks - diagnosis, prognosis and prescription<sup id="c3"></sup> - and I am going to explain what these tasks do and how they relate to CBR.</p>
<p>The way FLORENCE diagnoses should not be confused with the general meaning of diagnosis, which is normally to find the cause of a fault or disease. Here, diagnosis seeks to answer the question: "What is the current health status of this patient?"</p>
<p>Diagnosis concerns weighted health indicators, and the status of the patient is determined as a score over the indicator weights.<sup id="c3"></sup> The weighting essentially answers the question: "To what extent does indicator X predict the health status of sub-concept Y?" (p. 367)<sup id="c4"></sup></p>
<p>The diagnosis is a rule-based rather than a case-based process and is based on Gordon's eleven functional health patterns.<sup id="c5"></sup> The result of the diagnosis can, however, be stored in separate cases, as it generates numerical indicators of a patient's health status in various areas, which can later be easily observed and compared.</p>
<p>The next step is the prognosis, which seeks to answer the question: “How may the health status of this patient change in the future?”</p>
<p>This is based on what the diagnosis has produced, compared to previous patients' diagnoses. We want to find out about future complications before they occur in order to prevent them, and this is done using a case-based approach: the patient's diagnosis is compared to those of previous patients with similar issues or symptoms whose treatment was a success. Doing this comparison should generate a good picture of how the patient's health status is going to develop, leading us to our last and final step - prescription. (pp. 370-371)<sup id="c4"></sup></p>
<p>Prescription seeks to answer the question: “How may the health status of this patient be improved?”</p>
<p>The prescription looks at the prognosis and seeks to find a treatment based on what we have previously found out. By going through the records of previous similar patients, FLORENCE should now be able to suggest a good treatment for the problem. This is done by a combination of the rule-based and case-based approaches, i.e. looking through the suggested treatments and picking the one that should best apply to the new patient, utilising general knowledge of medicine. (pp. 373-374)<sup id="c4"></sup></p>
<p>In summary, the process is very similar to the basic way that any case based system works:</p>
<ul>
<li>Generate a case using the diagnostic process.</li>
<li>Retrieve previous similar cases using the prognosis process.</li>
<li>Suggest appropriate treatment based on the previous similar cases using the prescription process.</li>
</ul>
</div>
<div class="box" title="References">
<ol>
<li><a href="http://www.dfki.de/web/research/km/expertise/research/case-based-reasoning">http://www.dfki.de/web/research/km/expertise/research/case-based-reasoning</a></li>
<li><a href="https://books.google.co.uk/books?hl=sv&lr=&id=3qyjBQAAQBAJ&oi=fnd&pg=PP1&dq=case+based+reasoning&ots=QNswYzVS7E&sig=GnjMKEJ6xDfnLGrysVZr-X6xzBw%22%20\\l%20%22v=onepage&q&f=false#v=onepage&q&f=false">Case-Based Reasoning - Janet Kolodner 2014</a></li>
<li><a href="http://www.sciencedirect.com/science/article/pii/S1386505601002210">Cased-Based Reasoning for Medical Knowledge-Based Systems - Rainer Schmidt</a></li>
<li><a href="http://link.springer.com/chapter/10.1007%2F3-540-58330-0_100#page-1">The application of case-based reasoning to the tasks of health care planning - Carol Bradburn, John Zeleznikow</li>
<li><a href="http://en.wikipedia.org/wiki/Gordon%27s_functional_health_patterns">Wikipedia - Functional Health Patterns</a></li>
<li><a href="http://www.idi.ntnu.no/emner/tdt4173/papers/Aamodt_1994_Case.pdf">Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches - Agnar Aamodt, Enric Plaza</a></li>
</ol>
</div>
</div>
<!-- LOGIC PAGE -->
<div class="page" id="page-logic">
<div class="box" title="Introduction">
<p>Intention recognition and planning have always been two heavily related concepts, which is why logic has played an important part in this field from very early on. Specifically, intention recognition can be seen as the reverse of planning: planning focuses on finding actions that will achieve a known goal, while intention recognition tries to work out what the goal might be from possibly incomplete observed actions, which is in general more challenging. Since logic is considered the basis of many causal theories used for planning, it is unsurprising that it is widely used in intention recognition.<sup id="l1"></sup> Here I will introduce some basics of logical reasoning, explain the development of different logic approaches to intention recognition, and finish by giving an example of its application to terrorist intention recognition.</p>
</div>
<div class="box" title="Logic Tools">
<h3>Abductive Reasoning</h3>
<p>Put simply, abductive reasoning can be understood as a process of finding the best explanation. It produces hypotheses from observations that are often incomplete and tries to find the most likely explanation. Those explanations do not guarantee the conclusion, however, and further observations and tests can be used to obtain a more accurate result.<sup id="l2"></sup></p>
<h3>Deductive Reasoning</h3>
<p>Unlike abductive reasoning, deductive reasoning is about prediction and can guarantee a true conclusion if the given assertions are all true. Starting from general rules, it narrows down towards the final conclusion using valid observations.<sup id="l2"></sup></p>
<h3>Causal Theories</h3>
<p>Causal theory is a common premise used both in planning and in intention recognition. It assumes that the observed agent is taking actions that will lead to his goal, and that the observing agent shares some knowledge of the causal theory, so that observed actions can be reasoned about. Plan libraries and planning from first principles are two particular approaches applying causal theories.<sup id="l1"></sup></p>
</div>
<div class="box" title="Reasoning Formalism">
<h3>Plan Libraries</h3>
<p>Plan libraries are widely used abductive reasoning tools for intention recognition. They are often based on some knowledge or known facts about the observed agent's tasks. In a simple plan-library approach, one or more logic elements imply a goal of the observed agent. There are also many extended uses of simple plan libraries, including the Hierarchical Task Network (HTN) model of planning, which is enhanced by introducing subgoals; such a plan library can be used more flexibly and accurately. Another variation is to assume the observed agent is of BDI (belief-desire-intention) type, with test actions included that do not lead to a goal but are used as conditions for a goal to be believed to be acted towards.<sup id="l1"></sup></p>
<h3>Situation Calculus/Event Calculus</h3>
<p>Situation Calculus and Event Calculus are tools mostly used in deductive approaches. Situation Calculus introduces procedures of action to the logic, where each observation may advance the prediction to the next step. CONGOLOG is a further extension of Situation Calculus, which adds iteration, conditionals and while loops to identify goals with more complex actions. There is, however, one abductive approach using Event Calculus and planning from first principles; compared to plan libraries, it involves more complex logic, including preconditions. Another, more flexible approach is WIREC (Weighted Intention Recognition based on Event Calculus), which can be used with or without plan libraries. In this way it can recognise a more comprehensive range of intentions as well as focus on a known set of precise plans. The reasoning progresses through a graph by matching observations: the paths exist in the libraries if the plan is known, or they can be formed dynamically from the causal theory. The two modes can be switched on demand at any time.<sup id="l3"></sup></p>
</div>
<div class="box" title="Example">
<p>In this logic-based implementation for the ‘man in airport’ example, I will use the extended HTN plan-library model proposed by Jarvis, Lunt, and Myers, because it was designed for application to terrorist intention recognition.<sup id="l4"></sup></p>
<p>In this particular model, the plan libraries take the form of templates with additional information such as the ordering and preconditions of tasks, and the frequency and accuracy of observations. A simple formalisation of the templates in logic can be expressed as:<sup id="l1"></sup></p>
<span>Destroy (Group, Target, Time) ↔ physical-attack (Group, Target, Time)</span><br>
<span>physical-attack (Group, Target, Time) ↔ reconnaissance (Group, Target, Time1), prepare-attack (Group, Target, Time2), attack (Group, Target, Time), Time1 &lt; Time, Time2 &lt; Time</span>
<p>Each observed action of the man can be turned into a logic element:</p>
<ul>
<li>Emotion (Nervous)</li>
<li>¬Checked In</li>
<li>Area (Crowded)</li>
<li>Action (Fast, towards bathroom)</li>
<li>Unusual (Left bag behind)</li>
</ul>
<p>Among the listed observations, whether he has checked in can be treated as an important precondition: for example, it is highly unlikely that he is holding a bomb if he has already checked in and passed security. However, '¬Checked In' in this example neither increases nor decreases the suspicion that he is planning an attack. Some observations carry a frequency that corresponds to their value; for example, since he is in an international airport, which is almost always busy and crowded, 'Area (Crowded)' could be valued as low. Often one action can be an element of quite different goals. If the man is aiming to bomb the airport, his goal would cause him to be nervous and to leave his bag behind intentionally. Nevertheless, sickness can also result in acting nervously, followed by rushing to the bathroom, in which case it is also understandable that he left the bag behind carelessly. To summarise the above reasoning: two goals, feeling sick and planting a bomb, are considered at the same time and remain in process; the man needs to be observed further, or information provided from other sources, to recognise his intention.</p>
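<p>As a rough sketch of this reasoning, the observations can be matched against simplified goal templates in Python; the templates below are illustrative stand-ins, not the actual Jarvis, Lunt and Myers templates:</p>
<pre>
# A rough sketch of matching observed logic elements against goal
# templates from a plan library. The templates are illustrative only.

observations = {"nervous", "not_checked_in", "crowded_area",
                "rush_to_bathroom", "left_bag"}

goal_templates = {
    "plant_bomb":      {"nervous", "crowded_area", "left_bag"},
    "feeling_sick":    {"nervous", "rush_to_bathroom", "left_bag"},
    "make_phone_call": {"walk_to_phones"},
}

# Keep every goal whose template elements have all been observed so far;
# goals with no supporting observations are dropped.
candidates = [goal for goal, required in goal_templates.items()
              if required.issubset(observations)]

print(candidates)  # both 'plant_bomb' and 'feeling_sick' remain open
</pre>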
</div>
<div class="box" title="References">
<ol>
<li>Sadri, F., ‘Intention Recognition in Agents for Ambient Intelligence: Logic-Based Approaches’, Bosse, T. (ed.) Agents and Ambient Intelligence, Ambient Intelligence and Smart Environments, vol. 12, pp. 197–236. IOS Press. (2012)</li>
<li>M.Shanahan, ‘Prediction Is Deduction but Explanation Is Abduction’, Proceedings IJCAI, 89, pp. 1055-1060. (1989)</li>
<li>Kowalski R and Sadri F, ‘Reconciling the Event Calculus with the Situation Calculus’, Journal of Logic Programming, special issue on reasoning about action and change, Vol. 31, 39-58. (1997)</li>
<li>Jarvis, P.A., Lunt, T.F., Myers, K.L., ‘Identifying terrorist activity with AI plan-recognition technology’, AI Magazine, Vol 26, No. 3, 2005, 73-81. (2005)</li>
</ol>
</div>
</div>
<div id="footer">
<span>Adam Hosier</span>
<span>2015</span>
</div>
</div>
</body>
</html>