1

Video summary based on rate-distortion criterion

Chou, Chih-Wei 24 July 2008 (has links)
Due to advances in computer technology, video data are becoming pervasive in daily life. Managing multimedia video databases is increasingly important, and traditional database management designed for text documents is not suitable for video databases; an efficient video database therefore needs video summaries. A video summary consists of a number of key-frames; the key-frame is a simple yet effective form of summarizing a video sequence, and a summary helps users browse rapidly and effectively find the videos they are looking for. Besides key-frame extraction, video summarization has another important aspect: the number of key-frames. When storage and network bandwidth are limited, the number of key-frames must conform to those limits while still capturing the most representative frames. Video summarization is an important topic in multimedia video management. The number of key-frames in a summary is related to the distortion between the summary and the original video sequence: the more key-frames, the smaller the distortion. This thesis emphasizes key-frame extraction under a key-frame rate constraint. The user first specifies the number of key-frames, and the method then extracts the key-frames that minimize the distortion with respect to the original sequence under that limit. To understand the overall video structure, Normalized Cuts (NCuts) clustering is used to group similar video segments. The resulting clusters form a directed temporal graph, and a shortest-path algorithm is proposed to find the main structure of the video. The performance of the proposed method is demonstrated by experiments on a collection of videos from the Open Video Project, with a comparison between the proposed summaries, the Open Video storyboards, and a PME-based approach.
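The rate-distortion trade-off this abstract describes can be sketched as a small dynamic program. The sketch below is a hypothetical illustration, not the thesis's actual method: each frame is reduced to a single feature value, the sequence is partitioned into k contiguous segments, each segment is represented by its medoid as the key-frame, and distortion is the sum of absolute differences to that key-frame.

```python
def segment_cost(frames, i, j):
    # Cost of covering frames[i:j] with its best single key-frame (medoid):
    # minimum, over candidate key-frames, of the summed distance to it.
    return min(sum(abs(f - c) for f in frames[i:j]) for c in frames[i:j])

def select_key_frames(frames, k):
    """Pick k key-frames minimizing total distortion via dynamic programming.

    frames: one scalar feature per frame (a stand-in for real descriptors).
    Returns (key_frames, total_distortion)."""
    n = len(frames)
    INF = float("inf")
    # dp[m][j] = minimal distortion covering the first j frames with m segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + segment_cost(frames, i, j)
                if c < dp[m][j]:
                    dp[m][j] = c
                    cut[m][j] = i
    # Backtrack segment boundaries, then take each segment's medoid.
    bounds, j = [], n
    for m in range(k, 0, -1):
        i = cut[m][j]
        bounds.append((i, j))
        j = i
    bounds.reverse()
    keys = [
        min(frames[i:j], key=lambda c: sum(abs(f - c) for f in frames[i:j]))
        for i, j in bounds
    ]
    return keys, dp[k][n]
```

Raising the budget k can only lower the achievable distortion, which is exactly the rate-distortion relationship the abstract points out.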
2

Improved monocular videogrammetry for generating 3D dense point clouds of built infrastructure

Rashidi, Abbas 27 August 2014 (has links)
Videogrammetry is an affordable and easy-to-use technology for spatial 3D scene recovery. When applied to the civil engineering domain, a number of issues have to be taken into account. First, videotaping large-scale civil infrastructure scenes usually results in large video files filled with blurry, noisy, or simply redundant frames. This is often due to a higher frame rate than necessary relative to camera speed, camera and lens imperfections, and uncontrolled motions of the camera that result in motion blur. Only a small percentage of the collected video frames are required to achieve robust results, but choosing the right frames is a tough challenge. Second, the point cloud generated by a monocular videogrammetric pipeline is only defined up to scale, i.e. the user has to know at least one dimension of an object in the scene to scale up the entire scene. This issue significantly narrows the applications of generated point clouds in the civil engineering domain, since measurement is an essential part of every as-built documentation technology. Finally, for various reasons, including a lack of sufficient coverage during videotaping or the existence of texture-less areas, which are common in most indoor/outdoor civil engineering scenes, the quality of the generated point clouds is sometimes poor. This deficiency appears in the form of outliers, or of holes and gaps on the surfaces of point clouds. Several researchers have focused on this particular problem; however, the major issue with all currently existing algorithms is that they essentially treat holes and gaps as part of a smooth surface. This approach is not robust at the intersections of different surfaces or at corners where there are sharp edges. A robust algorithm for filling holes/gaps should be able to maintain sharp edges/corners, since they usually carry useful information, specifically for applications in the civil and infrastructure engineering domain.
To tackle these issues, this research presents and validates an improved videogrammetric pipeline for as-built documentation of indoor/outdoor applications in civil engineering areas. The research consists of three main components: 1. Optimized selection of key frames for processing. It is necessary to choose a number of informative key frames to get the best results from the videogrammetric pipeline. This step is particularly important for outdoor environments, as it is impossible to process the large number of frames in a long video clip. 2. Automated calculation of the absolute scale of the scene. A novel approach for obtaining the absolute scale of the point cloud using 2D and 3D patterns is proposed and validated. 3. Cleaning the point cloud data and filling holes on the surfaces of generated point clouds. The proposed algorithm is able to fill holes/gaps on the surfaces of point cloud data while maintaining sharp edges. To narrow the scope of the research, the main focus is on two specific applications: 1. As-built documentation of bridges and buildings as outdoor case studies. 2. As-built documentation of offices and rooms as indoor case studies. Other potential applications of monocular videogrammetry in the civil engineering domain are out of the scope of this research. Three metrics, namely accuracy, completeness, and processing time, are utilized to evaluate the proposed algorithms.
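The scale-ambiguity step above admits a very small sketch. This is an illustrative assumption, not the thesis's pattern-based pipeline: if two reconstructed 3D points are known to belong to a pattern of known physical length, one ratio fixes the metric scale of the whole cloud.

```python
import math

def absolute_scale(p_a, p_b, known_length_m):
    """Scale factor mapping an up-to-scale reconstruction to metric units,
    given two reconstructed points of a pattern whose true length is known."""
    d = math.dist(p_a, p_b)  # distance in the arbitrary reconstruction units
    if d == 0:
        raise ValueError("degenerate pattern points")
    return known_length_m / d

def rescale_cloud(points, s):
    # Apply the recovered scale to every point of the cloud.
    return [(x * s, y * s, z * s) for x, y, z in points]
```

In practice the thesis automates the detection of such patterns; the sketch only shows why a single known dimension suffices to resolve the monocular scale ambiguity.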
3

Avatar animation from SignWriting notation

Abrahams, Kenzo January 2015 (has links)
Magister Scientiae - MSc / The SASL project at the University of the Western Cape is in the process of developing a machine translation system that can translate fully-fledged phrases between South African Sign Language (SASL) and English in real time. To visualise sign language, the system aims to make use of a 3D humanoid avatar created by van Wyk. Moemedi used this avatar to create an animation system that visualises a small set of simple phrases from very simple SignWriting notation input. This research aims to achieve an animation system that can render full sign language sentences given complex SignWriting notation glyphs with multiple sections. The specific focus of the research is achieving animations that are accurate representations of the SignWriting input in terms of the five fundamental parameters of sign language, namely hand motion, location, orientation and shape, as well as non-manual features such as facial expressions. An experiment was carried out to determine the accuracy of the proposed system on a set of 20 SASL phrases annotated with SignWriting notation. The proposed system was found to be highly accurate, achieving an average accuracy of 81.6%.
4

Extraction of Key-Frames from an Unstable Video Feed

Vempati, Nikhilesh 28 September 2017 (has links) (PDF)
The APOLI project deals with Automated Power Line Inspection using highly automated Unmanned Aerial Systems. Besides the real-time damage assessment through on-board high-resolution image data exploitation, postprocessing of the video data is necessary. This Master's thesis deals with the implementation of an Isolator Detector Framework and a workflow in the Automotive Data and Time-triggered Framework (ADTF) that loads a video directly from a camera or from storage and extracts the key frames which contain objects of interest. This is done by implementing an object detection system in C++ and creating ADTF filters that detect the objects of interest and extract the key frames using a supervised learning platform. The use case is the extraction of frames from video samples that contain images of isolators from power transmission lines.
5

Key-Frame Based Video Super-Resolution for Hybrid Cameras

Lengyel, Robert 11 1900 (has links)
This work focuses on the high-frequency restoration of video sequences captured by a hybrid camera, using key-frames as high-frequency samples. The proposed method organizes the super-resolution process into a hierarchy and aims to maximize both speed and performance. Additionally, an advanced image processing simulator (EngineX) was developed to fine-tune the algorithm. / Super-resolution algorithms are designed to enhance the detail level of a particular image or video sequence. This is very difficult to achieve in practice because the problem is ill-posed, and it often requires regularization based on assumptions about texture or edges. The process can be aided by high-resolution key-frames such as those generated by a hybrid camera. A hybrid camera is capable of capturing footage at multiple spatial and temporal resolutions; the typical output consists of a high-resolution stream captured at a low frame rate and a low-resolution stream captured at a high frame rate. Key-frame based super-resolution algorithms exploit the spatial and temporal correlation between the high-resolution and low-resolution streams to reconstruct a high-resolution, high-frame-rate output stream. The proposed algorithm arranges several different classical and novel methods into a hierarchy. A residue formulation decides which pixels require further reconstruction if a particular hierarchy stage fails to provide the expected results when compared to the low-resolution prior. The hierarchy includes optical-flow based estimation, which warps high-frequency information from adjacent key-frames to the current frame. Specialized candidate pixel selection reduces the total number of pixels considered in the NLM stage. Occlusion is handled by a final fallback stage in the hierarchy.
Additionally, the running time for a CIF sequence of 30 frames has been significantly reduced, to under 3 minutes, by identifying which pixels require reconstruction with a particular method. A custom simulation environment implements the proposed method as well as many common image processing algorithms. EngineX provides a graphical interface where video sequences and image processing methods can be manipulated and combined. The framework allows for advanced features such as multithreading, parameter sweeping, and block-level abstraction, which aided the development of the proposed super-resolution algorithm. Both speed and performance were fine-tuned using the simulator, which is key to the method's improved quality over other traditional super-resolution schemes. / Thesis / Master of Applied Science (MASc)
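The residue-driven fallback hierarchy described in this abstract can be caricatured per pixel. The sketch below is an assumption-laden simplification, not the thesis's implementation: each stage (e.g. key-frame warp, then NLM) proposes an estimate, the residue check against the low-resolution observation is abstracted to a direct difference with tolerance `tol`, and a pixel that no stage explains falls through to the fallback.

```python
def reconstruct_pixel(observed_lr, stages, fallback, tol=2):
    """Try each super-resolution stage in order; accept the first estimate
    consistent (within tol) with the low-resolution observation, otherwise
    fall back to the final occlusion-handling stage."""
    for stage in stages:
        estimate = stage()
        if abs(estimate - observed_lr) <= tol:
            return estimate  # this stage's residue is acceptable
    return fallback()  # no stage matched the prior, e.g. an occluded pixel
```

The speed benefit the abstract claims comes from the same structure: cheap stages handle most pixels, so the expensive stages only ever see the pixels still flagged by the residue check.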
6

Unsupervised Video Summarization Using Adversarial Graph-Based Attention Network

Gunuganti, Jeshmitha 05 June 2023 (has links)
No description available.
7

Title-based video summarization using attention networks

Li, Changwei 23 August 2022 (has links)
No description available.
8

Rendering an avatar from sign writing notation for sign language animation

Moemedi, Kgatlhego Aretha January 2010 (has links)
This thesis presents an approach for automatically generating signing animations from a sign language notation. An avatar endowed with expressive gestures, as subtle as changes in facial expression, is used to render the sign language animations. SWML, an XML format of SignWriting, is provided as input. It transcribes sign language gestures in a format compatible with virtual signing. Relevant features of sign language gestures are extracted from the SWML. These features are then converted to body animation parameters, which are used to animate the avatar. Using key-frame animation techniques, intermediate key-frames approximate the expected sign language gestures. The avatar then renders the corresponding sign language gestures. These gestures are realistic and aesthetically acceptable and can be recognized and understood by Deaf people.
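The key-frame animation step above can be illustrated with a minimal sketch. The parameter names are hypothetical and the blend is plain linear interpolation, a stand-in for whatever interpolation the actual animation system uses: intermediate key-frames between two key poses are generated by blending each body animation parameter.

```python
def interpolate_pose(pose_a, pose_b, t):
    """Blend two key poses (dicts mapping animation parameter -> value)
    at time t in [0, 1], yielding one intermediate key-frame."""
    return {name: (1 - t) * pose_a[name] + t * pose_b[name] for name in pose_a}
```

Stepping t through several values between consecutive key poses yields the intermediate key-frames that approximate the continuous gesture.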
