Vehicular activity recognition based on virtual environment video imagery data
Human activity recognition is a complex and challenging task, especially for partially observable group activities that occur in confined spaces with limited visual observability and often under severe occlusion. Several applications, such as content-based video annotation and retrieval, highlight extraction, and video summarization, require recognition of the activities occurring in a video. Nevertheless, this task is vastly challenging because it demands deep cognitive understanding of spatiotemporal contexts, actions, relationships, objects, events, and interactions, which collectively explain the intricacy of group activities. The problem is even more difficult when such activities take place in inherently confined spaces that restrict observability. The goal of this thesis is to develop an inference model capable of recognizing vehicular activities from virtual environment video imagery gathered frame by frame, and of extracting and inferring the spatiotemporal information pertaining to those observations. To achieve this goal, a virtual environment simulation model is developed to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. We then develop appropriate image processing techniques for target segmentation and detection. Finally, by processing imagery data associated with in-vehicle activity and conducting rigorous experiments, we provide verification and validation of the proposed approach.
ETD Collection for Tennessee State University.