About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

The mat sat on the cat : investigating structure in the evaluation of order in machine translation

McCaffery, Martin January 2017
We present a multifaceted investigation into the relevance of word order in machine translation. We introduce two tools, DTED and DERP, each using dependency structure to detect differences between the structures of machine-produced translations and human-produced references. DTED applies the principle of Tree Edit Distance to calculate edit operations required to convert one structure into another. Four variants of DTED have been produced, differing in the importance they place on words which match between the two sentences. DERP represents a more detailed procedure, making use of the dependency relations between words when evaluating the disparities between paths connecting matching nodes. In order to empirically evaluate DTED and DERP, and as a standalone contribution, we have produced WOJ-DB, a database of human judgments. Containing scores relating to translation adequacy and more specifically to word order quality, this is intended to support investigations into a wide range of translation phenomena. We report an internal evaluation of the information in WOJ-DB, then use it to evaluate variants of DTED and DERP, both to determine their relative merit and their strength relative to third-party baselines. We present our conclusions about the importance of structure to the tools and their relevance to word order specifically, then propose further related avenues of research suggested or enabled by our work.
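The tree-comparison idea behind DTED can be illustrated with a toy sketch. The code below is not the thesis' DTED: it linearises two invented dependency trees in postorder and computes a plain Levenshtein edit distance over the resulting word sequences, a crude stand-in for tree edit distance; the trees and the simplification are assumptions for illustration.

```python
# Hypothetical sketch (not the thesis' actual DTED implementation): compare two
# dependency trees by linearising them in postorder and computing a word-level
# Levenshtein edit distance as a crude stand-in for tree edit distance.

def postorder(tree):
    """Tree = (word, [children]); return words in postorder."""
    word, children = tree
    out = []
    for child in children:
        out.extend(postorder(child))
    out.append(word)
    return out

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

# Two toy dependency trees for a reordered hypothesis vs. reference.
hyp = ("sat", [("cat", [("the", [])]), ("mat", [("the", []), ("on", [])])])
ref = ("sat", [("mat", [("the", [])]), ("cat", [("the", []), ("on", [])])])
print(edit_distance(postorder(hyp), postorder(ref)))
```

A full tree edit distance would also charge for structural moves; the linearised version here only sees the node labels.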
172

A Framework for Generative Product Design Powered by Deep Learning and Artificial Intelligence : Applied on Everyday Products

Nilsson, Alexander, Thönners, Martin January 2018
In this master’s thesis we explore the idea of using artificial intelligence in the product design process and seek to develop a conceptual framework for how it can be incorporated to make user-customized products more accessible and affordable for everyone. We show how generative deep learning models such as Variational Autoencoders and Generative Adversarial Networks can be implemented to generate design variations of windows, and clarify the general implementation process along with insights from recent research in the field. The proposed framework consists of three parts: (1) a morphological matrix connecting several identified possibilities of implementation to specific parts of the product design process; (2) a general step-by-step process for incorporating generative deep learning; (3) a description of common challenges, strategies and solutions related to the implementation process. Together with the framework we also provide a system for automatic gathering and cleaning of image data, as well as a dataset containing 4564 images of windows in a front-view perspective.
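As a hedged illustration of one building block the abstract mentions, the sketch below shows the reparameterisation step and KL term of a Variational Autoencoder in plain NumPy; the latent size and encoder outputs are invented, and a real model would wrap this in a trained encoder and decoder.

```python
# Illustrative sketch of the reparameterisation trick at the heart of a
# Variational Autoencoder, with NumPy standing in for a deep-learning
# framework; the (mu, log_var) values below are made up, not a trained encoder.
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I); in a real framework
    this keeps gradients flowing through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)) summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

rng = np.random.default_rng(0)
mu = np.zeros(4)          # assumed encoder mean for one input
log_var = np.zeros(4)     # assumed encoder log-variance (sigma = 1)
z = reparameterize(mu, log_var, rng)
print(z.shape, kl_divergence(mu, log_var))
```

With a standard-normal posterior (mu = 0, sigma = 1) the KL term vanishes, which is why the check below expects zero.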
173

Human-informed robotic percussion renderings: acquisition, analysis, and rendering of percussion performances using stochastic models and robotics

Van Rooyen, Robert Martinez 19 December 2018
A percussion performance by a skilled musician will often extend beyond a written score in terms of expressiveness. This assertion is clearly evident when comparing a human performance with one that has been rendered by some form of automaton that expressly follows a transcription. Although music notation enforces a significant set of constraints, it is the responsibility of the performer to interpret the piece and “bring it to life” in the context of the composition, style, and perhaps with a historical perspective. In this sense, the sheet music serves as a general guideline upon which to build a credible performance that can carry with it a myriad of subtle nuances. Variations in such attributes as timing, dynamics, and timbre all contribute to the quality of the performance that will make it unique within a population of musicians. The ultimate goal of this research is to gain a greater understanding of these subtle nuances, while simultaneously developing a set of stochastic motion models that can similarly approximate minute variations in multiple dimensions on a purpose-built robot. Live or recorded motion data, and algorithmic models will drive an articulated robust multi-axis mechatronic system that can render a unique and audibly pleasing performance that is comparable to its human counterpart using the same percussion instruments. By utilizing a non-invasive and flexible design, the robot can use any type of drum along with different types of striking implements to achieve an acoustic richness that would be hard if not impossible to capture by sampling or sound synthesis. The flow of this thesis will follow the course of this research by introducing high-level topics and providing an overview of related work. 
Next, a systematic method for gesture acquisition of a set of well-defined percussion scores will be introduced, followed by an analysis that will be used to derive a set of requirements for motion control and its associated electromechanical subsystems. A detailed multidiscipline engineering effort will be described that culminates in a robotic platform design within which the stochastic motion models can be utilized. An analysis will be performed to evaluate the characteristics of the robotic renderings when compared to human reference performances. Finally, this thesis will conclude by highlighting a set of contributions as well as topics that can be pursued in the future to advance percussion robotics. / Graduate / 2019-12-10
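The stochastic motion models described above can be caricatured in a few lines: perturb a notated pattern's timing and dynamics with small Gaussian deviations so that each rendering differs subtly, as a human's would. The deviation magnitudes and the score below are assumptions for illustration, not values measured from the thesis' performance data.

```python
# Hypothetical sketch of a stochastic performance model: add small Gaussian
# deviations to the nominal onset times and dynamics of a notated drum pattern.
# The standard deviations here are invented, not measured from performers.
import random

def humanize(score, timing_sd=0.004, velocity_sd=4.0, seed=42):
    """score: list of (onset_seconds, velocity 0-127); returns a perturbed copy."""
    rng = random.Random(seed)
    performance = []
    for onset, velocity in score:
        onset += rng.gauss(0.0, timing_sd)        # micro-timing deviation
        velocity += rng.gauss(0.0, velocity_sd)   # dynamic-level deviation
        performance.append((max(0.0, onset), min(127, max(1, round(velocity)))))
    return performance

# A bar of straight eighth notes at 120 BPM (0.25 s apart), accents on the beat.
score = [(i * 0.25, 100 if i % 2 == 0 else 80) for i in range(8)]
print(humanize(score))
```

A model fitted to acquired gesture data would replace the fixed Gaussians with per-context distributions, but the rendering loop has the same shape.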
174

Large planetary data visualization using ROAM 2.0

Persson, Anders January 2005
The problem of estimating an adequate level of detail for an object in a specific view is one of the important problems in 3D computer graphics, and is especially important in real-time applications. The well-known continuous level-of-detail technique Real-time Optimally Adapting Meshes (ROAM) has been employed with success for almost 10 years, but due to the rapid development of graphics hardware it has now been found to be inadequate: compared to many other level-of-detail techniques, it cannot benefit from the higher triangle throughput available on today's graphics cards. This thesis describes an implementation of the new version of ROAM (informally known as ROAM 2.0) for the purpose of massive planetary data visualization. It shows how the problems of the old technique can be bridged to adapt to newer graphics cards while still benefiting from the advantages of ROAM. The resulting implementation presented here is specialized for spherical objects and handles both texture and geometry data of arbitrarily large sizes in an efficient way.
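The split decision at the heart of a ROAM-style refinement loop can be sketched as follows; the error values, the projection rule, and the error-halving assumption are simplifications for illustration, not the thesis' implementation.

```python
# Simplified sketch of the core level-of-detail decision in a ROAM-style
# scheme: recursively split a triangle while its stored geometric error,
# projected by viewer distance, exceeds a screen-space tolerance.
# All numeric values are invented for illustration.

def refine(error, distance, tolerance, depth=0, max_depth=10):
    """Return the number of leaf triangles produced for one root triangle."""
    projected = error / max(distance, 1e-6)   # crude screen-space projection
    if projected <= tolerance or depth >= max_depth:
        return 1                              # triangle is fine enough
    # Assume splitting halves the geometric error of each child.
    return (refine(error / 2.0, distance, tolerance, depth + 1, max_depth) +
            refine(error / 2.0, distance, tolerance, depth + 1, max_depth))

# Nearby terrain refines more than distant terrain under the same tolerance.
near = refine(error=8.0, distance=1.0, tolerance=0.5)
far = refine(error=8.0, distance=100.0, tolerance=0.5)
print(near, far)
```

A real implementation maintains split/merge queues over a triangle bintree so the mesh adapts incrementally between frames rather than being rebuilt.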
175

Dynamic and Static Approaches for Glyph-Based Visualization of Software Metrics

Majid, Raja January 2008
This project presents research on software visualization techniques. We introduce the concepts of software visualization and software metrics, and propose two visualization techniques: Static Visualization (glyph objects with static textures) and Dynamic Visualization (glyph objects with moving parts). Our intent is to study existing techniques for visualizing software metrics and then to propose a new approach that is more time-efficient and easier for a viewer to perceive. In this project we focus on the practical aspects of visualizing multivariate datasets, and we provide an implementation of the proposed techniques and compare the two approaches in practice. We also describe the software development life cycle of the proposed visualization system and its complete implementation.
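The static-glyph idea can be sketched as a mapping from metric values to visual attributes of one glyph per module; the metric names, value ranges, and attribute choices below are assumptions for illustration, not the project's actual encoding.

```python
# Illustrative sketch: map several software metrics of a module onto visual
# attributes of a single glyph (size, colour hue, rotation). Ranges and
# mappings are invented for this sketch.

def normalize(value, lo, hi):
    """Scale a metric into [0, 1], clamping values outside the expected range."""
    return min(1.0, max(0.0, (value - lo) / float(hi - lo)))

def to_glyph(metrics):
    """metrics: dict with lines of code, cyclomatic complexity, coupling."""
    return {
        "size": 10 + 40 * normalize(metrics["loc"], 0, 2000),        # pixels
        "hue": 120 * (1 - normalize(metrics["complexity"], 1, 50)),  # green->red
        "rotation": 90 * normalize(metrics["coupling"], 0, 20),      # degrees
    }

glyph = to_glyph({"loc": 500, "complexity": 25, "coupling": 5})
print(glyph)
```

A dynamic variant would additionally animate one attribute (e.g. rotation speed) over time instead of encoding it statically.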
176

Contribution to the reconstruction of complex surfaces from large, unorganized datasets for 3D metrology

El hayek, Nadim 18 December 2014
Complex surfaces have applications in various fields such as photonics, energy, biomedicine and transport, but they exhibit real challenges with regard to their design specification, manufacturing and measurement, and the evaluation of their form defects. They are classified according to their geometric complexity as well as their required tolerance, and the manufacturing and measurement processes are selected accordingly. In order to extract significant information from the measured data, a data-processing step is essential. Here, processing involves surface reconstruction, with the aim of reconstituting the geometry and topology underlying the points and extracting the information needed for dimensional metrology (dimensional characteristics and form-error evaluation). For the category of aspherical surfaces, where a mathematical model is available, the data, which are not necessarily organized, are processed by associating (fitting) the model to them. The fitting residuals sought in optics are typically of the order of a nanometre. In this context, we propose the L-BFGS optimization algorithm, used for the first time in metrological applications, which solves unconstrained non-linear optimization problems robustly, automatically and rapidly. The L-BFGS method remains efficient for data containing several million points. For the category of general freeform surfaces, and particularly turbine blades, the manufacturing, measurement and data processing are at an altogether different, sub-micrometric scale. Freeform surfaces are generally not defined by a mathematical formula but are instead represented by parametric models such as B-Splines and/or NURBS. In this context, we give a detailed state-of-the-art review and propose a new iterative B-Spline fitting approach based on active contour deformation. The algorithm is free of the problems related to initialization and initial parameterization, and therefore constitutes a novelty in this field. We present a thorough study of the advantages and current limitations of this approach on examples of closed curves in 2D, and conclude with perspectives on improving the method and generalizing it to surfaces in 3D.
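The model-association step for aspherical surfaces can be sketched with SciPy's L-BFGS implementation (assuming SciPy is available; this is not the thesis' own code): fit the radius of a simple sphere model to synthetic noisy sample points by minimising the sum of squared residuals.

```python
# Illustrative sketch of model association with L-BFGS via SciPy: fit the
# radius r of a sphere sag model z = r - sqrt(r^2 - x^2 - y^2) to noisy
# synthetic points. The data are invented, not metrological measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
r_true = 50.0
x = rng.uniform(-5, 5, 2000)
y = rng.uniform(-5, 5, 2000)
z = r_true - np.sqrt(r_true**2 - x**2 - y**2) + rng.normal(0, 1e-4, x.size)

def cost(params):
    r = params[0]
    sag = r - np.sqrt(r**2 - x**2 - y**2)   # sphere sag at each (x, y)
    return np.sum((z - sag) ** 2)

result = minimize(cost, x0=[40.0], method="L-BFGS-B")
print(result.x[0])
```

A real asphere model adds conic and polynomial terms, and the fit is over all of its coefficients rather than a single radius, but the optimisation call has the same shape.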
177

High-resolution modelling with two-dimensional shallow-water-equations-based codes: high-resolution topographic data use for flood hazard assessment over urban and industrial environments

Abily, Morgan 11 December 2015
High-resolution (infra-metric) topographic data, including LiDAR and photo-interpreted datasets, are becoming commonly available at a large range of spatial extents, such as the municipality or industrial-site scale. These datasets are promising for generating High-Resolution (HR) Digital Elevation Models (DEMs), allowing the inclusion of the fine above-ground structures that influence overland-flow hydrodynamics in urban environments. DEMs are one key input in hydroinformatics for free-surface hydraulic modelling using standard numerical codes based on the 2D Shallow Water Equations (SWEs). Nonetheless, several categories of technical and numerical challenges arise from the use of this type of data with standard 2D SWE codes. The objective of this thesis is to examine the possibilities, advantages and limits of HR topographic data use within standard 2D hydraulic numerical modelling tools for flood hazard assessment. The concepts of HR topographic data and 2D SWE-based numerical modelling are recalled, and HR modelling is performed for (i) intense-runoff and (ii) river-flood scenarios using HR DEMs created from LiDAR and photo-interpreted datasets. Tests incorporating HR surface-elevation data in standard modelling tools range from the industrial-site scale to a megacity district scale (Nice, France). Several standard 2D SWE-based codes, offering different means of integrating the HR data and based on varied numerical methods, are tested (Mike 21, Mike 21 FM, TELEMAC-2D, FullSWOF_2D), and the added value of including the fine above-ground elements that affect the flow is demonstrated. Tools for assessing the uncertainties related to the use of these HR data are developed, and a spatial Global Sensitivity Analysis is performed. The resulting maps of Sobol sensitivity indices highlight and quantify the importance of the modeller's choices in the variance of HR flood-model results, as well as the spatial variability of the impact of the tested uncertain parameters.
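A toy version of the first-order Sobol indices mapped in the thesis can be estimated with a simple pick-freeze Monte Carlo scheme; the two-parameter linear model below is invented for illustration and is of course not a shallow-water code.

```python
# Toy sketch of first-order Sobol sensitivity indices estimated by a
# pick-freeze Monte Carlo scheme on an invented model Y = 4*X1 + 2*X2
# with independent standard-normal inputs (analytic S1 = 0.8, S2 = 0.2).
import numpy as np

def sobol_first_order(model, n=200_000, d=2, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, d))
    b = rng.standard_normal((n, d))
    y_a = model(a)
    y_b = model(b)
    var = y_a.var()
    indices = []
    for i in range(d):
        # "Freeze" column i from sample A, redraw the rest from sample B.
        ab = b.copy()
        ab[:, i] = a[:, i]
        indices.append(np.mean(y_a * (model(ab) - y_b)) / var)
    return indices

model = lambda x: 4.0 * x[:, 0] + 2.0 * x[:, 1]
s1, s2 = sobol_first_order(model)
print(round(s1, 2), round(s2, 2))
```

Mapping such indices spatially, as the thesis does, amounts to computing them per output cell of the flood model rather than for one scalar output.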
178

Audio noise reduction using deep neural networks

Talár, Ondřej January 2017
The thesis focuses on the use of a deep recurrent neural network, the Long Short-Term Memory (LSTM) architecture, for robust denoising of audio signals. LSTM is currently very attractive due to its ability to retain information from previous time steps and to update its state not only according to the training algorithm used but also in response to changes in neighbouring cells. The work describes the selection of the initial dataset and of the noise used, along with the creation of optimal test data. The Keras framework for Python is selected for network training. Candidate networks for possible solutions are explored and described, followed by several experiments to determine the true behaviour of the neural network.
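The gating mechanism that gives the LSTM its memory can be sketched in NumPy as a single cell step; the weights below are random, not a trained denoising network, and the stacked-gate layout is one common convention rather than Keras' internals.

```python
# Minimal NumPy sketch of one LSTM cell step, illustrating the input, forget
# and output gates; W, U, b stack the four gates' weights row-wise.
# All weights are random, for illustration only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """x: input (n_in,); h, c: previous hidden/cell state (n_h,)."""
    n_h = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0:n_h])                 # input gate
    f = sigmoid(z[n_h:2 * n_h])           # forget gate
    g = np.tanh(z[2 * n_h:3 * n_h])       # candidate cell state
    o = sigmoid(z[3 * n_h:4 * n_h])       # output gate
    c_new = f * c + i * g                 # mix old memory with new candidate
    h_new = o * np.tanh(c_new)            # hidden state exposed downstream
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_h = 3, 4
W = rng.standard_normal((4 * n_h, n_in)) * 0.1
U = rng.standard_normal((4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
for t in range(5):                        # run a short random input sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h.shape, c.shape)
```

In Keras the same mechanics sit behind a single `LSTM(units)` layer call; the sketch only makes the gate arithmetic visible.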
179

Blood vessel segmentation in retinal images using deep learning approaches

Serečunová, Stanislava January 2018
This diploma thesis deals with the application of deep neural networks, with a focus on image segmentation. The theoretical part contains a description of deep neural networks and a summary of widely used convolutional architectures for segmenting objects in images. The practical part of the work was devoted to testing existing network architectures. For this purpose, the open-source software library TensorFlow, used from the Python programming language, was chosen. A frequent problem in the use of convolutional neural networks is the requirement for a large amount of input data. To overcome this obstacle, a new data set was created from a combination of five freely available databases. The selected U-Net network architecture was first tested on the newly created data set and, based on the test results, modified; the resulting network achieves better performance than the original. The modified architecture was then trained on the newly created data set, which contains images of different types taken with various fundus cameras. As a result, the trained network is more robust and can segment retinal blood vessels in images with different parameters. The modified architecture was tested on the STARE, CHASE and HRF databases, and the results were compared with segmentation methods published in the literature, both those based on convolutional neural networks and classical ones. The created network shows a high success rate of retinal blood vessel segmentation, comparable to state-of-the-art methods.
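The encoder/decoder-with-skip-connections structure that defines U-Net can be sketched at the level of array shapes; convolutions are omitted and only pooling, upsampling and concatenation remain, so this is an illustration of the architecture's wiring, not the modified network from the thesis.

```python
# Shape-level NumPy sketch of the U-Net wiring: the encoder halves spatial
# resolution, the decoder restores it, and skip connections concatenate
# encoder features with upsampled decoder features. Convolutions omitted.
import numpy as np

def downsample(x):
    """2x2 average pooling over an array shaped (channels, H, W)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

image = np.random.default_rng(0).random((1, 64, 64))   # 1-channel retina patch
e1 = image                       # encoder level 1 features, 64x64
e2 = downsample(e1)              # 32x32
e3 = downsample(e2)              # 16x16 bottleneck
d2 = np.concatenate([upsample(e3), e2], axis=0)   # skip connection, 32x32
d1 = np.concatenate([upsample(d2), e1], axis=0)   # skip connection, 64x64
print(d1.shape)
```

The concatenated channels are what let the decoder recover fine vessel boundaries that pure upsampling would blur away.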
180

Obtaining and Processing of a Set of Vehicle License Plates

Kvapilová, Aneta January 2019
This master's thesis focuses on creating and processing a dataset containing semi-automatically processed images of vehicle licence plates. The main goal is to create videos and a set of tools that can transform input videos into a dataset for traffic-monitoring neural networks. The programming language used is Python, with the OpenCV graphics library and the PyTorch framework for the implementation of the neural network.
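One step of such a dataset pipeline, cropping annotated plate regions out of video frames, can be sketched with NumPy slicing standing in for the OpenCV calls a real pipeline would use; the frame and bounding boxes below are synthetic.

```python
# Hypothetical sketch of one dataset-preparation step: crop annotated
# licence-plate regions out of a frame. NumPy slicing stands in for OpenCV;
# the frame and the boxes are invented for illustration.
import numpy as np

def crop_plates(frame, annotations):
    """annotations: list of (x, y, width, height) boxes in pixel coordinates."""
    crops = []
    for x, y, w, h in annotations:
        crops.append(frame[y:y + h, x:x + w].copy())
    return crops

frame = np.zeros((720, 1280, 3), dtype=np.uint8)      # one synthetic video frame
boxes = [(100, 400, 160, 40), (900, 500, 150, 38)]    # assumed plate detections
plates = crop_plates(frame, boxes)
print([p.shape for p in plates])
```

In the semi-automatic setting the boxes would come from a detector and be corrected by hand before the crops are written out as training samples.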
