1

Leveraging the multimodal information from video content for video recommendation

Almeida, Adolfo Ricardo Lopes De, January 2021
Since the popularisation of media streaming, video streaming services have continually acquired new video content in pursuit of potential profit. As such, newly added content must be handled appropriately so that it can be recommended to suitable users. In this dissertation, the new-item cold-start problem is addressed by exploring the potential of various deep learning features to provide video recommendations. The deep learning features investigated include features that capture visual appearance, as well as audio and motion information, from video content. Different fusion methods are also explored to evaluate how well these feature modalities can be combined to fully exploit the complementary information they capture. Experiments on a real-world video dataset for movie recommendations show that deep learning features outperform hand-crafted features. In particular, recommendations generated with deep learning audio features and action-centric deep learning features are superior to those based on Mel-frequency cepstral coefficients (MFCCs) and state-of-the-art improved dense trajectory (iDT) features. It was also found that combining various deep learning features with textual metadata and hand-crafted features yields a significant improvement in recommendations, compared with combining only deep learning and hand-crafted features. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2021. / The MultiChoice Research Chair of Machine Learning at the University of Pretoria / UP Postgraduate Masters Research bursary / Electrical, Electronic and Computer Engineering / MEng (Computer Engineering) / Unrestricted
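The fusion of feature modalities described in the abstract can be illustrated with a minimal late-fusion sketch: each modality's feature vector is L2-normalised, the vectors are concatenated, and item-item cosine similarity over the fused vectors drives recommendation. The feature dimensions, values, and function names below are illustrative assumptions, not the dissertation's actual features or fusion method.

```python
# Minimal sketch of late fusion by feature concatenation for item-item
# similarity. All dimensions and values are hypothetical placeholders.
import numpy as np

def fuse_features(modalities):
    """Concatenate L2-normalised per-modality feature vectors for one video."""
    normed = []
    for feats in modalities:
        v = np.asarray(feats, dtype=float)
        n = np.linalg.norm(v)
        normed.append(v / n if n > 0 else v)
    return np.concatenate(normed)

def cosine(a, b):
    """Cosine similarity between two fused feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two hypothetical videos, each with visual, audio, and motion features.
video_a = fuse_features([[0.2, 0.9], [0.5, 0.5, 0.1], [0.3]])
video_b = fuse_features([[0.1, 0.8], [0.6, 0.4, 0.2], [0.4]])
similarity = cosine(video_a, video_b)  # item-item score for ranking candidates
```

Normalising each modality before concatenation keeps one modality's scale from dominating the fused representation, which is one common motivation for comparing fusion strategies.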
2

[pt] LOCALIZAÇÃO ESPAÇO-TEMPORAL DE ATORES EM VÍDEOS/VÍDEOS 360 E SUAS APLICAÇÕES / [en] SPATIO-TEMPORAL LOCALIZATION OF ACTORS IN VIDEO/360-VIDEO AND ITS APPLICATIONS

13 September 2021
[pt] A popularidade de plataformas para o armazenamento e compartilhamento de vídeo tem criado um volume massivo de horas de vídeo. Dado um conjunto de atores presentes em um vídeo, a geração de metadados com a determinação temporal dos intervalos em que cada um desses atores está presente, bem como a localização no espaço 2D dos quadros em cada um desses intervalos, pode facilitar a recuperação e a recomendação de vídeo. Neste trabalho, nós investigamos a Clusterização Facial em Vídeo para a localização espaço-temporal de atores. Primeiro descrevemos nosso método de Clusterização Facial em Vídeo, em que utilizamos métodos de detecção facial, geração de embeddings e clusterização para agrupar faces dos atores em diferentes quadros e fornecer a localização espaço-temporal desses atores. Então, nós exploramos, propomos e investigamos aplicações inovadoras dessa localização espaço-temporal em três diferentes tarefas: (i) Reconhecimento Facial em Vídeo, (ii) Recomendação de Vídeos Educacionais e (iii) Posicionamento de Legendas em Vídeos 360 graus. Para a tarefa (i), propomos um método baseado na similaridade de clústeres que é facilmente escalável e obteve um recall de 99.435 por cento e uma precisão de 99.131 por cento em um conjunto de vídeos. Para a tarefa (ii), propomos um método não supervisionado baseado na presença de professores em diferentes vídeos. Tal método não requer nenhuma informação adicional sobre os vídeos e obteve um valor de mAP de aproximadamente 99 por cento. Para a tarefa (iii), propomos o posicionamento dinâmico de legendas baseado na localização de atores em vídeo 360 graus. / [en] The popularity of platforms for the storage and transmission of video content has created a substantial volume of video data.
Given a set of actors present in a video, generating metadata with the temporal determination of the intervals in which each actor is present, and their spatial 2D localization in each frame of these intervals, can facilitate video retrieval and recommendation. In this work, we investigate Video Face Clustering for this spatio-temporal localization of actors in videos. We first describe our method for Video Face Clustering, in which we take advantage of face detection, embedding, and clustering methods to group similar faces of actors in different frames and provide their spatio-temporal localization. Then, we explore, propose, and investigate innovative applications of this spatio-temporal localization in three different tasks: (i) Video Face Recognition, (ii) Educational Video Recommendation, and (iii) Subtitle Positioning in 360-video. For (i), we propose a cluster-matching-based method that is easily scalable and achieved a recall of 99.435 percent and a precision of 99.131 percent on a small video set. For (ii), we propose an unsupervised method based on the presence of lecturers in different videos that does not require any additional information about the videos and achieved an mAP of approximately 99 percent. For (iii), we propose a dynamic placement of subtitles based on the automatic localization of actors in 360-video.
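The face-clustering step described above — grouping per-frame face embeddings so that each cluster corresponds to one actor and its member frames give that actor's temporal intervals — can be sketched with a simple greedy cosine-similarity threshold. The toy embeddings, the threshold value, and the greedy scheme are illustrative assumptions; the thesis's actual detector, embedding model, and clustering method are not specified here.

```python
# Minimal sketch: cluster per-frame face embeddings into actor groups with a
# greedy cosine-similarity threshold. Embeddings and threshold are toy values.
import numpy as np

def cluster_faces(embeddings, frames, threshold=0.8):
    """Assign each (embedding, frame) face detection to a cluster (one per actor)."""
    centroids, members = [], []  # running embedding sum and frame list per cluster
    for emb, frame in zip(embeddings, frames):
        v = np.asarray(emb, dtype=float)
        v = v / np.linalg.norm(v)
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            sim = float(np.dot(v, c / np.linalg.norm(c)))
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:                 # no cluster is similar enough: new actor
            centroids.append(v)
            members.append([frame])
        else:                            # fold detection into the closest cluster
            centroids[best] = centroids[best] + v
            members[best].append(frame)
    return members  # each cluster's frames give one actor's temporal presence

# Toy detections: two actors with clearly distinct embedding directions.
embs = [[1, 0], [0.99, 0.05], [0, 1], [0.02, 0.98]]
frames = [1, 2, 1, 3]
clusters = cluster_faces(embs, frames)
# → [[1, 2], [1, 3]]: actor 0 appears in frames 1-2, actor 1 in frames 1 and 3
```

Pairing each cluster with the 2D bounding boxes of its member detections would then yield the full spatio-temporal localization used by the three downstream applications.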
