
Modélisation de contextes pour l'annotation sémantique de vidéos / Context based modeling for video semantic annotation

Recent years have witnessed an explosion in the amount of multimedia content available. In 2010, the video-sharing website YouTube announced that 35 hours of video were uploaded to its site every minute, whereas in 2008 users were "only" uploading 12 hours of video per minute. Given this growth in data volume, human analysis of each video is no longer feasible; automated video analysis systems are needed.

This thesis proposes a solution to automatically annotate video content with a textual description. The core novelty of the thesis is the use of multiple sources of contextual information to perform the annotation.

With the constant expansion of online visual collections, automatic video annotation has become a major problem in computer vision. It consists in detecting various objects (human, car, ...), dynamic actions (running, driving, ...) and scene characteristics (indoor, outdoor, ...) in unconstrained videos. Progress in this domain would impact a wide range of applications, including video search, intelligent video surveillance and human-computer interaction.

Although some progress has been made in concept annotation, it remains an unsolved problem, notably because of the semantic gap. The semantic gap is defined as the lack of correspondence between video features and high-level human understanding. This gap is principally due to the intra-class variability of concepts, caused by photometric changes, object deformation, object motion, camera motion or viewpoint changes.

To tackle the semantic gap, we enrich the description of a video with multiple sources of contextual information. Context is defined as "the set of circumstances in which an event occurs". Video appearance, motion or space-time distribution can be considered as contextual clues associated with a concept. We argue that a single context is not informative enough to discriminate a concept in a video. However, by considering several contexts at the same time, we can address the semantic gap.
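To illustrate the multi-context idea in code, the sketch below trains one classifier per contextual descriptor (appearance, motion, space-time distribution) and averages their scores, a simple late-fusion baseline. This is a hypothetical example, not the method developed in the thesis: the feature names, dimensions and random data are placeholders.

```python
# Hypothetical sketch: late fusion of several contextual descriptors for
# concept detection. Features and data are synthetic placeholders; this is
# a generic baseline, not the fusion scheme proposed in the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos = 200

# One feature matrix per context (e.g. appearance, motion, space-time layout).
contexts = {
    "appearance": rng.normal(size=(n_videos, 128)),
    "motion": rng.normal(size=(n_videos, 64)),
    "space_time": rng.normal(size=(n_videos, 32)),
}
labels = rng.integers(0, 2, size=n_videos)  # concept present / absent

# Train one classifier per context.
classifiers = {
    name: LogisticRegression(max_iter=1000).fit(feats, labels)
    for name, feats in contexts.items()
}

def fused_score(feature_dict):
    """Average the per-context probabilities that the concept is present."""
    scores = [clf.predict_proba(feature_dict[name])[:, 1]
              for name, clf in classifiers.items()]
    return np.mean(scores, axis=0)

# Score the first five videos using all three contexts jointly.
print(fused_score({name: feats[:5] for name, feats in contexts.items()}))
```

Averaging per-context scores is only one possible way of combining contexts; the point of the sketch is that no single descriptor is used in isolation to decide whether a concept is present.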

Identifier: oai:union.ndltd.org:theses.fr/2013ENMP0051
Date: 12 November 2013
Creators: Ballas, Nicolas
Contributors: Paris, ENMP, Prêteux, Françoise, Delezoide, Bertrand
Source Sets: Dépôt national des thèses électroniques françaises
Language: French
Detected Language: English
Type: Electronic Thesis or Dissertation, Text
