Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/129607
Type: Thesis
Title: Deep Learning Methods for Human Activity Recognition using Wearables
Author: Abedin, Alireza
Issue Date: 2020
School/Discipline: School of Computer Science
Abstract: Wearable sensors provide an infrastructure-less, multi-modal sensing method. Current trends point to a pervasive integration of wearables into our lives, with these devices providing the basis for wellness and healthcare applications across rehabilitation, care for a growing older population, and improving human performance. Fundamental to these applications is our ability to automatically and accurately recognise human activities from the often tiny sensors embedded in wearables.

This dissertation considers the problem of human activity recognition (HAR) using multi-channel time-series data captured by wearable sensors. Our collective know-how in solving HAR problems with wearables has progressed immensely through the use of deep learning paradigms; nevertheless, the field still faces unique methodological challenges. This dissertation therefore focuses on developing end-to-end deep learning frameworks that promote HAR application opportunities using wearable sensor technologies and mitigate specific associated challenges. The investigated problems cover a diverse range of HAR challenges and span fully supervised to unsupervised problem domains.

To enhance automatic feature extraction from multi-channel time-series data for HAR, the problem of learning enriched and highly discriminative activity feature representations with deep neural networks is considered. Novel end-to-end network elements are designed which: (a) exploit the latent relationships between multi-channel sensor modalities and specific activities, (b) employ effective regularisation through data-agnostic augmentation of multi-modal sensor data streams, and (c) incorporate optimisation objectives that encourage minimal intra-class representation differences while maximising inter-class differences, yielding more discriminative features.

To promote new opportunities in HAR with emerging battery-less sensing platforms, the problem of learning from irregularly sampled and temporally sparse readings captured by passive sensing modalities is considered. For the first time, an efficient set-based deep learning framework is developed for this problem; it learns directly from the generated data and bypasses the conventional interpolation pre-processing stage.

To address the multi-class window problem and create potential solutions for the challenging task of concurrent human activity recognition, the problem of simultaneously predicting multiple activities for a sensory segment is considered. The flexibility provided by emerging set-learning concepts is leveraged further to introduce a novel formulation that treats HAR as a set prediction problem and elegantly caters for segments carrying sensor data from multiple activities. To address this set prediction problem, a unified deep HAR architecture is designed that: (a) incorporates a set objective to learn mappings from raw input sensory segments to target activity sets, and (b) precedes the supervised learning phase with unsupervised parameter pre-training to exploit unlabelled data for better generalisation performance.
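As an illustration of how such a set objective can be realised, the sketch below pairs predicted activity "slots" with the target activity set via Hungarian matching before applying a classification loss. This is an assumed, DETR-style construction for illustration only; the model, class counts and helper names (SetHAR, set_loss) are hypothetical and not taken from the thesis.

    # Illustrative sketch (assumed design, not the thesis's exact architecture):
    # a window encoder emits K "slot" predictions over C activity classes plus a
    # "no-activity" class; training matches slots to the ground-truth activity
    # set with the Hungarian algorithm before applying cross-entropy.
    import torch
    import torch.nn as nn
    from scipy.optimize import linear_sum_assignment

    C, K = 10, 4          # hypothetical: 10 activity classes, at most 4 per segment
    NO_ACT = C            # index of the "no activity" padding class

    class SetHAR(nn.Module):
        def __init__(self, in_channels=6, hidden=64):
            super().__init__()
            self.encoder = nn.GRU(in_channels, hidden, batch_first=True)
            self.slot_head = nn.Linear(hidden, K * (C + 1))

        def forward(self, x):                      # x: (batch, time, channels)
            _, h = self.encoder(x)                 # h: (1, batch, hidden)
            return self.slot_head(h[-1]).view(-1, K, C + 1)   # per-slot logits

    def set_loss(slot_logits, target_sets):
        """Hungarian-matched cross-entropy; unmatched slots learn 'no activity'."""
        losses = []
        for logits, targets in zip(slot_logits, target_sets):     # per segment
            probs = logits.softmax(-1)                             # (K, C+1)
            assigned = torch.full((K,), NO_ACT, dtype=torch.long)
            if len(targets) > 0:
                cost = -probs[:, targets].detach().cpu().numpy()   # (K, |targets|)
                rows, cols = linear_sum_assignment(cost)
                assigned[torch.as_tensor(rows)] = torch.as_tensor(
                    [targets[c] for c in cols])
            losses.append(nn.functional.cross_entropy(logits, assigned))
        return torch.stack(losses).mean()

    # usage: one segment of 100 time steps x 6 channels, true activity set {2, 5}
    model = SetHAR()
    logits = model(torch.randn(1, 100, 6))
    set_loss(logits, [[2, 5]]).backward()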
To leverage easily accessible unlabelled activity data streams for downstream classification tasks, the problem of unsupervised representation learning from multi-channel time-series data is considered. For the first time, a recurrent generative adversarial network (GAN) framework is developed that explores the GAN's latent feature space to extract highly discriminative activity features in an unsupervised fashion. The superiority of the learned representations is substantiated by their ability to outperform the de facto unsupervised approaches based on autoencoder frameworks, while rivalling the recognition performance of fully supervised models on downstream classification benchmarks.

In recognition of the scarcity of large-scale annotated sensor datasets and the tedium of collecting additional labelled data in this domain, the hitherto unexplored problem of end-to-end clustering of human activities from unlabelled wearable data is considered. A first study is presented towards a stand-alone deep learning paradigm for discovering semantically meaningful clusters of human actions. In particular, the paradigm is intended to: (a) leverage the inherently sequential nature of sensory data, (b) exploit self-supervision from reconstruction and future-prediction tasks, and (c) incorporate clustering-oriented objectives that promote the formation of highly discriminative activity clusters. The systematic investigations in this study create new opportunities for HAR to learn human activities from unlabelled data that can be conveniently and cheaply collected from wearables.
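To make the combination of self-supervision and clustering objectives concrete, the sketch below trains a GRU autoencoder with reconstruction and future-prediction losses alongside a DEC-style soft cluster-assignment term. It is a minimal sketch under assumed design choices; the class and loss names (ClusterHAR, total_loss) and all hyper-parameters are hypothetical, not the thesis's actual model.

    # Illustrative sketch (assumed components, not the thesis's exact model):
    # a GRU autoencoder is trained with reconstruction and future-prediction
    # losses for self-supervision, plus a DEC-style soft-assignment term that
    # pulls embeddings towards learnable cluster centres.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClusterHAR(nn.Module):
        def __init__(self, channels=6, hidden=64, n_clusters=8):
            super().__init__()
            self.encoder = nn.GRU(channels, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.recon_head = nn.Linear(hidden, channels)   # reconstruct input window
            self.future_head = nn.Linear(hidden, channels)  # predict upcoming readings
            self.centres = nn.Parameter(torch.randn(n_clusters, hidden))

        def forward(self, x):                               # x: (batch, time, channels)
            _, h = self.encoder(x)
            z = h[-1]                                       # segment embedding
            steps = z.unsqueeze(1).repeat(1, x.size(1), 1)  # feed embedding at each step
            dec, _ = self.decoder(steps)
            return z, self.recon_head(dec), self.future_head(dec)

    def soft_assign(z, centres, alpha=1.0):
        """Student's t soft cluster assignments (DEC-style)."""
        d2 = torch.cdist(z, centres).pow(2)
        q = (1.0 + d2 / alpha).pow(-(alpha + 1) / 2)
        return q / q.sum(dim=1, keepdim=True)

    def total_loss(model, x, x_future, gamma=0.1):
        z, recon, future = model(x)
        q = soft_assign(z, model.centres)
        p = q**2 / q.sum(0)                                 # sharpened target distribution
        p = p / p.sum(1, keepdim=True)
        return (F.mse_loss(recon, x)                                    # reconstruction
                + F.mse_loss(future[:, -x_future.size(1):], x_future)  # future prediction
                + gamma * F.kl_div(q.log(), p.detach(), reduction='batchmean'))

    # usage: 32 windows of 100 steps x 6 channels, predict the following 10 steps
    x, x_future = torch.randn(32, 100, 6), torch.randn(32, 10, 6)
    model = ClusterHAR()
    total_loss(model, x, x_future).backward()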
Advisor: Ranasinghe, Damith Chinthana
Dissertation Note: Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
Keywords: Deep learning
human activity recognition
wearable sensors
Provenance: This electronic version is made publicly available by the University of Adelaide in accordance with its open access policy for student theses. Copyright in this thesis remains with the author. This thesis may incorporate third party material which has been used by the author pursuant to Fair Dealing exceptions. If you are the owner of any included third party copyright material you wish to be removed from this electronic version, please complete the take down form located at: http://www.adelaide.edu.au/legals
Appears in Collections: Research Theses

Files in This Item:
File: Abedin2020_PhD.pdf (18.71 MB, Adobe PDF)

