"> Interpretable Action Recognition on Hard to Classify Actions

Activity frames illustrating the five phases of the action "Putting something into something" (red and blue bounding boxes indicate objects; light blue indicates the hand).

Abstract

We investigate a human-like, interpretable model of video understanding. Humans recognise complex activities in video by recognising critical spatio-temporal relations among explicitly recognised objects and parts, for example, an object entering the aperture of a container. To mimic this, we build on a model which uses the positions of objects and hands, and their motions, to recognise the activity taking place. To improve this model, we focussed on three of its most confused classes and identified that the lack of 3D information was the major problem. To address this, we extended our basic model by adding 3D awareness in two ways: (1) a state-of-the-art object detection model was fine-tuned to distinguish "Container" from "NotContainer", in order to integrate object shape information into the existing object features; (2) a state-of-the-art depth estimation model was used to extract depth values for individual objects and compute depth relations, expanding the set of relations used by our interpretable model. These 3D extensions to our basic model were evaluated on a subset of three superficially similar "Putting" actions from the Something-Something-v2 dataset. The results showed that the container detector did not improve performance, but adding depth relations improved performance significantly.
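The depth-relation extension can be illustrated with a minimal sketch. It assumes per-object bounding boxes from the detector and a dense depth map from a monocular depth estimator; the function names, the "smaller value = closer" depth convention, and the margin value below are illustrative assumptions, not the implementation described in the paper.

import numpy as np

def object_depth(depth_map: np.ndarray, box: tuple) -> float:
    """Median depth inside a bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    region = depth_map[y1:y2, x1:x2]
    return float(np.median(region))

def depth_relation(depth_a: float, depth_b: float, margin: float = 0.05) -> str:
    """Qualitative depth relation between two objects.

    `margin` is a hypothetical tolerance; this sketch assumes smaller
    depth values mean the object is closer to the camera.
    """
    if abs(depth_a - depth_b) <= margin:
        return "same-depth"
    return "in-front-of" if depth_a < depth_b else "behind"

# Synthetic depth map standing in for the output of a depth estimator.
depth_map = np.linspace(0.2, 1.0, 100 * 100).reshape(100, 100)
hand_box = (10, 10, 30, 30)        # hypothetical hand detection
container_box = (60, 60, 90, 90)   # hypothetical container detection

d_hand = object_depth(depth_map, hand_box)
d_container = object_depth(depth_map, container_box)
print(depth_relation(d_hand, d_container))  # prints "in-front-of"

Per-frame relations of this kind ("in-front-of", "behind", "same-depth") can then be added alongside the existing 2D spatial relations between the hand and the detected objects.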

Poster

BibTeX

@inproceedings{Anichenko:InterpretActions:ECCVWS:2024,
        AUTHOR = "Anichenko, Anastasia and Guerin, Frank and Gilbert, Andrew",
        TITLE = "Interpretable Action Recognition on Hard to Classify Actions",
        BOOKTITLE = "The European Conference on Computer Vision 2024, Human-inspired Computer Vision Workshop",
        YEAR = "2024",
}