Openly accessible

Policy recognition in the Abstract Hidden Markov Model

Bui, Hung H., Venkatesh, Svetha and West, Geoff 2002, Policy recognition in the Abstract Hidden Markov Model, Journal of artificial intelligence research, vol. 17, pp. 451-499, doi: 10.1613/jair.839.

Attached Files
Name: venkatesh-policyrecognition-2002.pdf
Description: Published version
MIME type: application/pdf
Size: 496.86 KB

Title Policy recognition in the Abstract Hidden Markov Model
Author(s) Bui, Hung H.
Venkatesh, Svetha (ORCID: orcid.org/0000-0001-8675-6631)
West, Geoff
Journal name Journal of artificial intelligence research
Volume number 17
Start page 451
End page 499
Total pages 49
Publisher AI Access Foundation, Inc
Place of publication El Segundo, Calif.
Publication date 2002
ISSN 1076-9757; 1943-5037
Keyword(s) computational complexity
decision theory
Markov processes
mathematical models
neural networks
probability
Summary In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for the AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
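To make the Rao-Blackwellised filtering idea mentioned in the summary concrete, the following is a minimal illustrative sketch in Python/NumPy of a Rao-Blackwellised particle filter for a toy two-level model in the spirit of the AHMM: each particle samples a top-level policy trajectory, while the belief over the low-level state is updated analytically with an exact HMM forward step. All matrices, dimensions, and names here are hypothetical placeholders for illustration only; this is not the paper's actual model or inference algorithm.

import numpy as np

# Illustrative Rao-Blackwellised particle filter for a hypothetical two-level model.
# Sampled component: the top-level policy trajectory.
# Analytically marginalised (Rao-Blackwellised) component: the low-level state.

rng = np.random.default_rng(0)

N_POLICIES, N_STATES, N_OBS, N_PARTICLES = 3, 5, 4, 200

# Hypothetical model parameters (each row is a probability distribution).
policy_trans = rng.dirichlet(np.ones(N_POLICIES), size=N_POLICIES)            # p(pi_t | pi_{t-1})
state_trans  = rng.dirichlet(np.ones(N_STATES), size=(N_POLICIES, N_STATES))  # p(s_t | s_{t-1}, pi_t)
obs_model    = rng.dirichlet(np.ones(N_OBS), size=N_STATES)                   # p(o_t | s_t)

# Particle set: a sampled policy plus an exact belief vector over the low-level state.
policies = rng.integers(N_POLICIES, size=N_PARTICLES)
beliefs  = np.full((N_PARTICLES, N_STATES), 1.0 / N_STATES)
weights  = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def step(obs):
    """One filtering step given an observed symbol `obs`."""
    global policies, beliefs, weights
    for i in range(N_PARTICLES):
        # Sample the top-level policy transition (the sampled part of the filter).
        policies[i] = rng.choice(N_POLICIES, p=policy_trans[policies[i]])
        # Exact HMM forward update for the low-level state, conditioned on the sample.
        predicted = beliefs[i] @ state_trans[policies[i]]
        unnorm = predicted * obs_model[:, obs]
        weights[i] *= unnorm.sum()            # weight by the observation likelihood
        beliefs[i] = unnorm / unnorm.sum()
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:
        idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
        policies, beliefs = policies[idx], beliefs[idx].copy()
        weights[:] = 1.0 / N_PARTICLES

for obs in [0, 2, 1, 3]:                      # toy observation sequence
    step(obs)

# Approximate posterior over the current top-level policy.
posterior = np.bincount(policies, weights=weights, minlength=N_POLICIES)
print("P(policy | observations) ~", posterior)

The design choice this sketch illustrates is the one highlighted in the summary: sampling only the higher-level (policy) variables and handling the remaining variables exactly keeps the particle dimension small, which is what allows inference cost to scale well with the depth of the plan hierarchy.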
Notes
Every reasonable effort has been made to ensure that permission has been obtained for items included in Deakin Research Online. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au

Language eng
DOI 10.1613/jair.839
Field of Research 080109 Pattern Recognition and Data Mining
Socio Economic Objective 890205 Information Processing Services (incl. Data Entry and Capture)
HERDC Research category C1.1 Refereed article in a scholarly journal
Copyright notice ©2002, AI Access Foundation
Persistent URL http://hdl.handle.net/10536/DRO/DU:30044252

Document type: Journal Article
Collections: School of Information Technology
Open Access Collection
Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.

Created: Thu, 05 Apr 2012, 16:01:10 EST
