61

Générateur de coprocesseur pour le traitement de données en flux (vidéo ou similaire) sur FPGA. / CoProcessor generator for real-time data flow processing FPGA

Goavec-Merou, Gwenhael 26 November 2014 (has links)
L’utilisation de matrices de portes logiques reconfigurables (FPGA) est une des seules solutions pour traiter des flux de plusieurs centaines de MÉchantillons/seconde en temps réel. Toutefois, ce type de composant présente une grande difficulté de mise en œuvre : au-delà d’un langage spécifique, c’est tout un environnement matériel et une certaine expérience qui sont requis pour obtenir les traitements les plus efficaces. Afin de contourner cette difficulté, de nombreux travaux ont été réalisés dans le but de proposer des solutions qui, partant d’un code écrit dans un langage de haut niveau, vont produire un code dans un langage dédié aux FPGA. Nos travaux, suivant l’approche d’assemblage de blocs et la méthode du skeleton, ont visé à mettre en place un logiciel, nommé CoGen, permettant, à partir de codes déjà développés et validés, de construire des chaînes de traitements en tenant compte des caractéristiques du FPGA cible et du débit entrant et sortant de chaque bloc, pour garantir l’obtention de la solution la plus adaptée possible aux besoins et contraintes. Les implémentations des blocs de traitements sont générées soit automatiquement soit manuellement. Les entrées-sorties de chaque bloc doivent respecter une norme pour être exploitables dans l’outil. Le développeur doit fournir une description des ressources nécessaires et des limitations du débit de données pouvant être traité. CoGen fournit à l’utilisateur moins expérimenté une méthode d’assemblage de ces blocs garantissant le synchronisme et la cohérence des flux de données ainsi que la capacité à synthétiser le code sur les ressources matérielles accessibles. Cette méthodologie de travail est appliquée à des traitements sur des flux vidéo (seuillage, détection de contours et analyse des modes propres d’un diapason) et sur des flux radiofréquences (interrogation d’un capteur sans fil par méthode RADAR, réception d’un flux modulé en fréquence et, finalement, implémentation de blocs de base pour déporter le maximum de traitements en numérique). / Using Field Programmable Gate Arrays (FPGAs) is one of the very few solutions for real-time processing of data flows of several hundred MSamples/second. However, using such components is technically challenging: beyond the need to become familiar with a new kind of dedicated description language and new ways of describing algorithms, understanding the hardware behaviour is mandatory for implementing efficient processing solutions. In order to circumvent these difficulties, past research has focused on providing solutions which, starting from a description of an algorithm in a high-abstraction-level language, generate a description appropriate for FPGA configuration. Our contribution, following the strategy of block assembly based on the skeleton method, aimed at providing a software environment called CoGen for assembling various implementations of readily available and validated processing blocks. The resulting processing chain is optimized by including FPGA hardware characteristics and the input and output bandwidths of each block, in order to provide the solution best fitting the requirements and constraints. Each processing block implementation is either generated automatically or written manually, but must comply with some constraints in order to be usable by our tool. In addition, each block developer must provide a standardized description of the block, including the required resources and data processing bandwidth limitations. CoGen then provides the less experienced user with the means to assemble these blocks while ensuring synchronism and consistency of the data flow, as well as the ability to synthesize the processing chain within the available hardware resources. This working method has been applied to video data flow processing (thresholding, contour detection and tuning fork eigenmode analysis) and to radiofrequency data flows (wireless interrogation of sensors through a RADAR system, software processing of a frequency modulated stream, and software defined radio).
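The constraint-aware chain assembly described in this abstract can be illustrated with a small sketch. Everything here (block names, rates, resource figures, and the validation function itself) is hypothetical and does not reflect CoGen's actual interface; it only shows the idea of validating a processing chain against per-block bandwidth limits and target-FPGA resources:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """A processing block as such a tool might describe it: the maximum
    sample rate it sustains and the logic resources it consumes."""
    name: str
    max_rate_msps: float      # maximum input rate, MSamples/s
    rate_ratio: float = 1.0   # output rate / input rate (e.g. 0.5 for decimation)
    luts: int = 0             # logic resources the block consumes

def validate_chain(blocks, source_rate_msps, available_luts):
    """Check that every block in the chain sustains its incoming rate
    and that the whole chain fits in the target FPGA."""
    rate = source_rate_msps
    used = 0
    for b in blocks:
        if rate > b.max_rate_msps:
            return False, f"{b.name} cannot sustain {rate} MS/s"
        used += b.luts
        rate *= b.rate_ratio
    if used > available_luts:
        return False, f"chain needs {used} LUTs, only {available_luts} available"
    return True, f"output rate {rate} MS/s, {used} LUTs used"

# Illustrative video chain: threshold, edge detection, then 2x decimation.
chain = [Block("threshold", 200, 1.0, 800),
         Block("edge_detect", 150, 1.0, 2500),
         Block("decimate_x2", 150, 0.5, 400)]
ok, msg = validate_chain(chain, 100, 10000)
```

A chain is rejected as soon as one block receives a higher rate than it can sustain, mirroring the abstract's point that input/output bandwidths of each block must be taken into account before synthesis.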
62

Vibration-Based Structural Health Monitoring of Structures Using a New Algorithm for Signal Feature Extraction and Investigation of Vortex-Induced Vibrations

Qarib, Hossein January 2020 (has links)
No description available.
63

Sumarizace obsahu videí / Video Content Summarization

Jaška, Roman January 2018 (has links)
The amount of surveillance footage recorded each day is too large for human operators to analyze. A video summarization system to process and refine this video data would prove beneficial in many instances. This work defines the problem in terms of its inputs, outputs and sub-problems, identifies suitable techniques and existing works, and describes the design of such a system. The system is implemented and the results are examined.
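Summarization of this kind typically rests on keyframe selection. As a minimal illustration (not the thesis's actual design), frames can be kept only when they differ sufficiently from the last retained frame:

```python
def summarize(frames, threshold=0.2):
    """Keep a frame only when it differs enough from the last kept frame:
    mean absolute pixel difference on flattened grayscale frames in [0, 1]."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        prev, cur = frames[kept[-1]], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            kept.append(i)
    return kept

# Four tiny 4-"pixel" frames: a static scene, then a change, then static again.
frames = [[0.1, 0.1, 0.1, 0.1],
          [0.1, 0.1, 0.1, 0.1],
          [0.9, 0.9, 0.9, 0.9],
          [0.9, 0.9, 0.9, 0.9]]
keyframes = summarize(frames)   # indices of representative frames
```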
64

Anonymizace videa / Video Anonymization

Mokrý, Martin January 2019 (has links)
The goal of this thesis is to design and create an automatic system for video anonymization. The system applies various object detectors to each image, combined with active tracking of the objects detected in this manner. Adjustments are then applied to these detected objects to ensure a sufficient level of anonymization. The main benefit of this system is speeding up the anonymization of videos intended for subsequent publication.
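The anonymization step applied to detected objects can be sketched minimally: given bounding boxes from a (here hypothetical) detector, each region is flattened to its mean intensity, the crudest form of blurring. This is an illustration only, not the system's actual adjustment:

```python
def anonymize(image, detections):
    """Replace each detected region of a grayscale image (list of rows)
    with its mean value, removing identifying detail."""
    out = [row[:] for row in image]
    for (x, y, w, h) in detections:
        vals = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
        mean = sum(vals) / len(vals)
        for r in range(y, y + h):
            for c in range(x, x + w):
                out[r][c] = mean
    return out

image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
# Pretend a face detector returned the top-left 2x2 box as (x, y, w, h).
blurred = anonymize(image, [(0, 0, 2, 2)])
```

In a real pipeline the detector's boxes would be smoothed over time by the tracker before blurring, so that a missed detection in one frame does not expose the object.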
65

On-line Analýza Dat s Využitím Vizuálních Slovníků / On-line Data Analysis Based on Visual Codebooks

Beran, Vítězslav Unknown Date (has links)
This work introduces a new adaptable method for on-line, real-time video retrieval based on visual codebooks. The new method targets low computational cost and high retrieval accuracy in on-line use. It builds on techniques used with static visual codebooks, adapting these standard techniques so that they can adjust to changing data. The mechanisms that achieve this in the new method are dynamic inverse document frequency, an adaptable visual codebook, and a variable inverted index. The proposed approach was evaluated on a video retrieval task, and the presented results show how the adaptable method behaves compared with the static approach. The new adaptable method is based on a sliding-window concept that defines how data are selected for adaptation and processing. Together with this concept, a mathematical framework is defined that makes it possible to assess how the concept is best exploited by various video processing methods. The adaptable method is of practical use in video processing systems where the character of the visual data is expected to change, or where the character of the incoming visual data is not known in advance.
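The dynamic inverse document frequency mentioned above can be sketched as follows. This is a minimal illustration of the sliding-window idea, not the thesis's actual formulation: document frequencies are maintained only over the last N frames, so visual-word weights adapt as the stream changes:

```python
import math
from collections import deque, Counter

class SlidingWindowIDF:
    """Inverse document frequency computed only over the last N frames,
    so term weights track the changing character of the stream."""
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)
        self.doc_freq = Counter()

    def add_frame(self, visual_words):
        words = set(visual_words)
        if len(self.window) == self.window.maxlen:
            # The oldest frame is about to be evicted: forget its counts.
            for w in self.window[0]:
                self.doc_freq[w] -= 1
        self.window.append(words)
        self.doc_freq.update(words)

    def idf(self, word):
        n = len(self.window)
        return math.log((n + 1) / (self.doc_freq[word] + 1))

w = SlidingWindowIDF(window_size=2)
w.add_frame([1, 2])
w.add_frame([2, 3])
w.add_frame([2, 4])   # frame {1, 2} slides out of the window
```

Word 2 appears in every frame still inside the window, so its weight drops to zero, while rarer words keep a positive weight; a static codebook would instead average over the whole history.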
66

HDR video "plugin" pro Adobe Premier / HDR Video Plugin for Adobe Premier

Svatý, Lukáš January 2015 (has links)
The goal of this thesis is to create support for editing video in an HDR format. Adobe Premiere Pro was chosen as the video editing application, and a plugin providing the required functionality was developed for it. The thesis explains the principles of creating, displaying and storing high-dynamic-range content, as well as the principles of building plugins for Adobe Premiere Pro using the CS6 SDK. The practical part of the thesis describes the implementation details, the problems that were solved, and the plugin itself. The plugin is designed so as to allow further work on this software, the addition of new functionality, and the use of this work in the development of high-dynamic-range content.
67

Prednasky.com - systém pro automatické zpracování přednášek / Prednasky.com - system for automatic lecture processing

Černý, Pavel January 2015 (has links)
The objective of this thesis is to create a system for automated processing of video lectures. The thesis has two main parts. The first is a Bash framework built from several separate atomic tasks: starting from a plain recording prepared for processing, the pipeline adds an intro sequence, subtitles and a synchronized presentation. The second part is a web application designed for playing, editing and creating videos; it further provides process management for the framework and administration of the whole system.
68

Contribution à la perception augmentée de scènes dynamiques : schémas temps réels d’assimilation de données pour la mécanique du solide et des structures / Contribution to augmented observation of dynamic scenes : real time data assimilation schemes for solid and structure mechanics

Goeller, Adrien 19 January 2018 (has links)
Dans le monde industriel comme dans le monde scientifique, le développement de capteurs a toujours répondu à la volonté d’observer l’inobservable. La caméra rapide fait partie de ceux-là puisqu’elle permet de dévoiler des dynamiques invisibles, de la formation de fissure au vol du moustique. Dans un environnement extrêmement concurrentiel, ces caméras sont principalement limitées par le nombre d’images acquises par seconde. Le but de cette thèse est d’augmenter la capacité de dévoiler la dynamique invisible en enrichissant l’acquisition initiale par des modèles dynamiques. La problématique consiste alors à élaborer des méthodes permettant de relier en temps réel un modèle et la perception d’un système réel. Les bénéfices de cette utilisation offrent ainsi la possibilité de faire de l’interpolation, de la prédiction et de l’identification. Cette thèse est composée de trois parties. La première est axée sur la philosophie du traitement vidéo et propose d’utiliser des modèles élémentaires et génériques. Un algorithme d’estimation de grands mouvements est proposé mais l’approche actuellement proposée n’est pas assez générique pour être exploitée dans un contexte industriel. La deuxième partie propose d’utiliser des méthodes d’assimilation de données séquentielle basées sur la famille des filtres de Kalman afin d’associer un modèle avec des observations par caméras rapides pour des systèmes mécaniques. La troisième partie est une application à l’analyse modale expérimentale non linéaire. Deux schémas d’assimilation temps réel multicapteurs sont présentés et leur mise en œuvre est illustrée pour de la reconstruction 3D et de la magnification. / The development of sensors has always followed the ambition of industrial and scientific people to observe the unobservable. High speed cameras are part of this adventure, revealing invisible dynamics such as cracks formation or subtle mosquito flight. 
Industrial high speed vision is a very competitive domain in which cameras stand out through their acquisition speed. This thesis aims to broaden their capacity by augmenting the initial acquisition with dynamic models. This work proposes methods linking, in real time, a model with a real system; the intended benefits are interpolation, prediction and identification. Three parts are developed. The first is based on video processing and proposes the use of elementary, generic kinematic models. An algorithm of motion estimation for large movements is proposed, but the approach as currently formulated is not generic enough to be exploited in an industrial context. The second part proposes sequential data assimilation methods known as Kalman filters. A scheme assimilating video data with a mechanical model is successfully implemented, and an application of data assimilation to modal analysis is developed. Two multi-sensor, real-time assimilation schemes for nonlinear modal identification are proposed; these schemes are integrated in two applications, 3D reconstruction and motion magnification.
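The sequential assimilation schemes referred to here belong to the Kalman filter family. A minimal linear predict/update cycle can be sketched as follows; the matrices and rates below are illustrative (a constant-velocity model with a position measurement), not taken from the thesis:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter: the model F
    propagates the state, the camera measurement z corrects it."""
    # Predict with the dynamic model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the (possibly slower-rate) measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model; only position is observed by the camera.
dt = 1e-3
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[1e-2]])
x = np.zeros(2)
P = np.eye(2)
x, P = kalman_step(x, P, np.array([0.5]), F, H, Q, R)
```

Between camera frames the predict step can be iterated alone, which is precisely what allows a model-based scheme to interpolate dynamics faster than the acquisition rate.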
69

Semantic content analysis for effective video segmentation, summarisation and retrieval.

Ren, Jinchang January 2009 (has links)
This thesis focuses on four main research themes, namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights-based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance-based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proven to be insensitive to zero-mean noise, and its gradient-based extension is robust even to non-zero-mean noise and can be used to deal with non-overlapping regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity-level measurement, shots and sub-shots are detected for content-adaptive video summarisation. Fourthly, highlights-based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. A high-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is high efficiency, which will be useful for many online applications. / EU IST FP6 Project
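The subspace phase correlation proposed in the thesis extends classical phase correlation; the classical baseline itself can be sketched briefly (this is the standard method, not the thesis's SPC variant): the translation between two frames appears as the location of the peak in the inverse transform of their normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two equally sized
    grayscale frames via the phase of their cross-power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12   # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices around to negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame = rng.random((32, 32))
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
dy, dx = phase_correlation_shift(shifted, frame)
```

Because only the phase is kept, the peak location is unaffected by global intensity scaling, which hints at why variants of this method can be made robust to various noise models, as the abstract discusses.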
70

Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation.

Chen, Juan January 2009 (has links)
Recent research approaches in semantics-based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and an FSM (finite state machine). Thirdly, shot detection is implemented using local and global indicators. Fourthly, a context-awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies; it is robust to complicated distortions and capable of locating copied segments inside original videos. Then, objects and events are extracted from MPEG sequences for video highlights indexing and retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
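A simple shot-cut detector of the kind mentioned first can be sketched with histogram differences between successive frames; this is an illustrative baseline, not the thesis's algorithm:

```python
def detect_cuts(frames, bins=8, threshold=0.5):
    """Declare a cut wherever the L1 distance between successive
    normalized grayscale histograms exceeds a threshold."""
    def hist(frame):
        h = [0] * bins
        for v in frame:          # pixel values in [0, 1)
            h[min(int(v * bins), bins - 1)] += 1
        n = len(frame)
        return [c / n for c in h]

    cuts = []
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        if sum(abs(p - c) for p, c in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two "shots" of flattened grayscale frames: dark, then bright.
dark = [0.1] * 16
bright = [0.9] * 16
cuts = detect_cuts([dark, dark, bright, bright])
```

Such a fixed-threshold detector fires on abrupt cuts but not on gradual transitions, which is exactly the gap the rule-based, FSM, context-aware and fuzzy variants listed above are designed to close.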
