Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/112331
Full metadata record
dc.contributor.author: Wang, X.
dc.contributor.author: Gao, L.
dc.contributor.author: Wang, P.
dc.contributor.author: Sun, X.
dc.contributor.author: Liu, X.
dc.date.issued: 2018
dc.identifier.citation: IEEE Transactions on Multimedia, 2018; 20(3):634-644
dc.identifier.issn: 1520-9210
dc.identifier.issn: 1941-0077
dc.identifier.uri: http://hdl.handle.net/2440/112331
dc.description.abstract: 3-D convolutional neural networks (3-D convNets) have recently been proposed for action recognition in videos, with promising results. However, existing 3-D convNets impose two "artificial" requirements that may reduce the quality of video analysis: 1) they require a fixed-size (e.g., 112 × 112) input video; and 2) most require a fixed-length input (i.e., video shots with a fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D convNet Fusion, which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. Taking a sequence of shots as input, each stream is implemented as a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, whose softmax scores are combined by late fusion. We devise the STPP convNet to extract an equal-dimensional description for each variable-size shot, and we adopt the LSTM/CNN-E model to learn a global description of the input video from these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos, and the experimental results show that it outperforms state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51, and ACT).
dc.description.statementofresponsibility: Xuanhan Wang, Lianli Gao, Peng Wang, Xiaoshuai Sun and Xianglong Liu
dc.language.iso: en
dc.publisher: IEEE
dc.rights: © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
dc.source.uri: http://dx.doi.org/10.1109/tmm.2017.2749159
dc.subject: Action recognition; 3D convolution neural networks
dc.title: Two-stream 3-D convNet Fusion for action recognition in videos with arbitrary size and length
dc.type: Journal article
dc.identifier.doi: 10.1109/TMM.2017.2749159
pubs.publication-status: Published
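The core mechanism described in the abstract, pooling variable-size shots into equal-dimensional descriptors and fusing per-stream softmax scores, can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the pyramid levels and the fusion weight alpha are hypothetical, as the record does not give the paper's actual configuration.

```python
# Minimal sketch of the fixed-output pooling idea behind STPP, plus late fusion.
# The pyramid levels and fusion weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def stpp_descriptor(volume, levels=((1, 1, 1), (2, 2, 2), (1, 4, 4))):
    """Pool a variable-size 3-D feature volume (N, C, T, H, W) into a
    fixed-length descriptor, independent of T, H, and W."""
    n = volume.shape[0]
    pooled = []
    for t, h, w in levels:
        # Adaptive pooling yields a t x h x w grid regardless of input size.
        p = F.adaptive_max_pool3d(volume, output_size=(t, h, w))
        pooled.append(p.reshape(n, -1))
    return torch.cat(pooled, dim=1)

def late_fuse(spatial_scores, temporal_scores, alpha=0.5):
    """Combine per-stream softmax scores by a weighted average (late fusion)."""
    return alpha * spatial_scores + (1 - alpha) * temporal_scores

# Shots of different lengths and sizes map to descriptors of equal dimension.
a = stpp_descriptor(torch.randn(1, 64, 16, 28, 40))
b = stpp_descriptor(torch.randn(1, 64, 9, 13, 17))
assert a.shape == b.shape == (1, 64 * (1 + 8 + 16))
```

The assertion shows the property the abstract emphasizes: because each pyramid level uses adaptive pooling, the descriptor length depends only on the channel count and the pyramid configuration, never on the shot's spatial or temporal extent.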
Appears in Collections: Aurora harvest 8
Electrical and Electronic Engineering publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.