Region-based activity recognition using conditional GAN

Document Type

Conference Proceeding

Publication Date

10-23-2017

Journal

MM 2017 - Proceedings of the 2017 ACM Multimedia Conference

DOI

10.1145/3123266.3123365

Keywords

Activity recognition; Deep learning; Generative adversarial network; Localization

Abstract

We present a method for activity recognition that first estimates the location of the activity performer and then uses that location estimate together with the input data for recognition. Existing approaches feed video frames or entire videos directly into feature extraction and recognition, treating the classifier as a black box. Our method first localizes the activity in each input video frame by generating an activity mask with a conditional generative adversarial network (cGAN). The generated mask is appended to the color channels of the input frames, and the resulting frames are fed into a VGG-LSTM network for activity recognition. To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system predicts the activity for each video frame and achieves performance comparable to state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks encourage the network to learn features representative of the activity itself rather than incidental surrounding information.
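A minimal sketch of the recognition pipeline described in the abstract, assuming PyTorch and torchvision; the module names, layer sizes, and the stand-in mask generator are illustrative and not the authors' implementation. A generated 1-channel activity mask is concatenated to each RGB frame as a fourth channel, a VGG backbone encodes each 4-channel frame, and an LSTM aggregates the per-frame features to produce a per-frame activity prediction.

    # Sketch: mask-as-fourth-channel input to a VGG-LSTM recognizer (illustrative only).
    import torch
    import torch.nn as nn
    from torchvision import models


    class MaskGenerator(nn.Module):
        """Placeholder for the cGAN generator: maps an RGB frame to a soft activity mask."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame):                  # frame: (B, 3, H, W)
            return self.net(frame)                 # mask:  (B, 1, H, W)


    class MaskedVGGLSTM(nn.Module):
        """VGG features over 4-channel (RGB + mask) frames, LSTM over time, per-frame logits."""
        def __init__(self, num_classes, hidden=512):
            super().__init__()
            vgg = models.vgg16(weights=None)
            # Replace the first conv layer so the backbone accepts RGB + mask (4 channels).
            vgg.features[0] = nn.Conv2d(4, 64, kernel_size=3, padding=1)
            self.backbone = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
            self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
            self.classifier = nn.Linear(hidden, num_classes)

        def forward(self, frames, masks):          # frames: (B, T, 3, H, W), masks: (B, T, 1, H, W)
            x = torch.cat([frames, masks], dim=2)  # (B, T, 4, H, W)
            b, t = x.shape[:2]
            feats = self.backbone(x.flatten(0, 1)).flatten(1)  # (B*T, 512)
            out, _ = self.lstm(feats.view(b, t, -1))            # (B, T, hidden)
            return self.classifier(out)                          # (B, T, num_classes)


    # Usage on random data: 4-frame clips of 112x112 frames, 10 activity classes.
    gen, net = MaskGenerator(), MaskedVGGLSTM(num_classes=10)
    frames = torch.rand(2, 4, 3, 112, 112)
    masks = torch.stack([gen(frames[:, i]) for i in range(frames.shape[1])], dim=1)
    logits = net(frames, masks)                    # shape (2, 4, 10), one prediction per frame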
