Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/131362
Type: | Conference paper |
Title: | Human-AI interactive and continuous sensemaking: A case study of image classification using scribble attention maps |
Author: | Shen, H.; Liao, K.; Liao, Z.; Doornberg, J.; Qiao, M.; Van Den Hengel, A.; Verjans, J.W. |
Citation: | Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21), 2021 / Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T. (ed./s), pp.1-8 |
Publisher: | Association for Computing Machinery |
Publisher Place: | New York, NY |
Issue Date: | 2021 |
ISBN: | 9781450380959 |
Conference Name: | International Conference on Human Factors in Computing Systems (CHI) (8 May 2021 - 13 May 2021 : virtual online) |
Editor: | Kitamura, Y.; Quigley, A.; Isbister, K.; Igarashi, T. |
Statement of Responsibility: | Haifeng Shen, Kewen Liao, Zhibin Liao, Job Doornberg, Maoying Qiao, Anton van den Hengel, Johan W. Verjans |
Abstract: | Advances in Artificial Intelligence (AI), and especially the stunning achievements of Deep Learning (DL) in recent years, show that AI/DL models can capture remarkable knowledge about the reasoning behind the tasks they solve. However, human understanding of what knowledge is captured by deep neural networks remains elementary, and this has a detrimental effect on humans' trust in the decisions made by AI systems. Explainable AI (XAI) is a hot topic in both the AI and HCI communities, aiming to open up the black box and elucidate the reasoning processes of AI algorithms in a way that makes sense to humans. However, XAI is only half of human-AI interaction, and research on the other half - humans' feedback on AI explanations, together with AI making sense of that feedback - is generally lacking. Human cognition is also a black box to AI, and effective human-AI interaction requires unveiling both black boxes to each other for mutual sensemaking. The main contribution of this paper is a conceptual framework for supporting effective human-AI interaction, referred to as Human-AI Interactive and Continuous Sensemaking (HAICS). We further implement this framework in an image classification application using deep Convolutional Neural Network (CNN) classifiers, as a browser-based tool that displays the network's attention maps to the human for explainability and collects the human's feedback in the form of scribble annotations overlaid onto the maps. Experimental results on a real-world dataset show a significant improvement in classification accuracy (the AI performance) with the HAICS framework. (An illustrative sketch of this attention-map and scribble-feedback loop appears after this record.) |
Keywords: | interactive sensemaking; explainable AI; image classification; attention map; scribble interaction |
Rights: | © 2021 Association for Computing Machinery. |
DOI: | 10.1145/3411763.3451798 |
Published version: | https://dl.acm.org/doi/book/10.1145/3411763 |
Appears in Collections: | Aurora harvest 8; Australian Institute for Machine Learning publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
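The abstract describes a loop in which a CNN classifier's attention map is shown to a human and the human's scribble feedback is fed back to the model. The Python sketch below is illustrative only and is not the authors' released code: it assumes a Grad-CAM-style attention map computed from a torchvision ResNet-18 and a hypothetical scribble-alignment penalty; the model, layer choice, and loss are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' implementation): a Grad-CAM-style
# attention map for a CNN classifier, plus a hypothetical penalty measuring
# disagreement between the map and a human scribble mask.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=2)   # assumed binary classification task
model.eval()

# Capture activations and gradients of the last convolutional block.
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def attention_map(image, target_class):
    """Grad-CAM: weight feature maps by the pooled gradient of the target logit."""
    logits = model(image)                                 # (1, num_classes)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)                       # normalised to [0, 1]

def scribble_alignment_loss(cam, scribble_mask):
    """Hypothetical feedback term: reward attention inside the human's
    positive scribble region and penalise attention outside it."""
    inside = (cam * scribble_mask).sum() / (scribble_mask.sum() + 1e-8)
    outside = (cam * (1 - scribble_mask)).sum() / ((1 - scribble_mask).sum() + 1e-8)
    return outside - inside                               # lower is better

# Toy usage: a random image and a scribble covering the image centre.
image = torch.randn(1, 3, 224, 224)
scribble = torch.zeros(1, 1, 224, 224)
scribble[..., 80:144, 80:144] = 1.0
cam = attention_map(image, target_class=1)
print("scribble alignment loss:", scribble_alignment_loss(cam, scribble).item())
```

In a HAICS-style loop, a term such as `scribble_alignment_loss` could be added to the training objective so that subsequent fine-tuning steps make sense of the human's feedback; the specific form of that objective here is an assumption, not the paper's stated method.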