  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Modelling of tsunami generated by submarine landslides

Sue, Langford Phillip January 2007
Tsunamis are fascinating but potentially devastating natural phenomena that have occurred regularly throughout history along New Zealand's shorelines and around the world. With increasing population and the construction of infrastructure in coastal zones, the effect of these large waves has become a major concern. Many natural phenomena are capable of creating tsunamis; of particular concern is the underwater landslide-induced tsunami, due to the potentially short warning before waves reach the shore. The aims of this research are to generate a quality benchmark dataset suitable for comprehensive comparisons with numerical model results and to increase our understanding of the physical processes involved in tsunami generation. The two-dimensional experimental configuration is based on a benchmark configuration described in the scientific literature, consisting of a semi-elliptical prism sliding down a submerged 15° slope. A unique feature of these experiments is the method developed to measure water surface variation continuously in both space and time. Water levels are obtained using an optical technique based on laser-induced fluorescence, which is shown to be comparable in accuracy and resolution to traditional electrical point wave gauges. In the experiments, the landslide density and initial submergence are varied, and detailed measurements of wave heights, lengths, propagation speeds, and shore run-up are made. Particle tracking velocimetry is used to record the landslide kinematics and sub-surface water velocities. Particular attention is paid to maintaining a high level of test repeatability throughout the experimental process. The experimental results show that a region of high pressure ahead of the landslide forces up the water over the front half of the landslide to form the leading wave crest, which propagates ahead of the landslide. 
The accelerating fluid above, and the turbulent wake behind, the moving landslide create a region of low pressure, which draws down the water surface above the rear half of the landslide to form the leading trough. Differences in the phase and group velocities of the components in the wave packet cause waves to be continually generated on the trailing end of the wave train. The position at which these waves form moves continually downstream with time, and the wave packet is found to be highly dispersive. The interaction of the landslide pressure field with the free surface wave pressure field is important, as the location of the low pressure around the landslide relative to the wave field acts to reinforce or suppress the waves above. This has a substantial effect on the increase or decrease in wave potential energy. When the low pressure acts to draw down a wave trough, the wave potential energy increases. When the low pressure is below a wave crest, it acts to suppress the crest amplitude, leading to an overall decrease in wave potential energy. Measurements of the efficiency of energy transfer from the landslide to the wave field show that the ratio of maximum wave potential energy to maximum landslide kinetic energy is between 0.028 and 0.138, and tends to increase for shallower initial landslide submergences and larger specific gravities. The ratio of maximum wave potential energy to maximum landslide potential energy ranges between 0.011 and 0.059 and tends to be greater for shallower initial submergences. For two experimental configurations the ratio of maximum wave potential energy to maximum fluid kinetic energy is estimated to be 0.435 and 0.588. The wave trough initially generated above the rear end of the landslide propagates in both onshore and offshore directions. The onshore-propagating trough causes a large initial draw-down at the shore. 
The magnitude of the maximum draw-down is related to the maximum amplitude of the offshore-propagating first wave trough. A wave crest generated by the landslide as it decelerates at the bottom of the slope causes the maximum wave run-up observed at the shore. A semi-analytical model, based on inviscid and irrotational theory, is used to investigate the wave generation process of a moving submerged object in a constant depth channel. The simplified geometry allows a variety of phenomena, observed during the experimental tests, to be investigated further in a more controlled setting. The variations in the growth, magnitude, and decay of energy as a function of time are due to the interaction of the pressure distribution surrounding the moving slider with the wave field, in particular, the leading crest and trough. The largest energy transfer between slider kinetic energy and wave potential energy occurs when there is prolonged interaction between the slider's low pressure region and the leading wave trough. The generation of onshore propagating waves by a decelerating landslide is confirmed, and the magnitude of the maximum wave run-up is found to be dependent on the magnitude of the slider deceleration. The model also shows that slides with a Froude number close to unity convert substantial amounts of energy into offshore propagating waves. The onshore propagating wave potential energy is not as sensitive to Froude number. A further result from the model simulations is that the specific shape of the slider has only a minor influence on the wave response, provided the slider's length and area are known. A boundary element model, based on inviscid and irrotational theory, is used to simulate the laboratory experiments. 
Model predictions of the wave field are generally accurate, particularly the magnitude and range of wave amplitudes within the wave packet, the arrival time of the wave group, the amplitude of the run-up and run-down at the shore, the time the maximum run-down occurs, and the form and magnitude of the wave potential energy time history. The ratios of maximum wave potential energy to maximum slider kinetic energy are predicted to within ±29%. The model predictions of the crest arrival times are within 3.6% of the measured times. The inability of the inviscid and irrotational model to simulate the flow separation and wake motions leads to a 45% underprediction of the maximum fluid kinetic energy. Both the semi-analytical and BEM models highlight the need for the correct specification of initial slider accelerations in numerical simulations in order to accurately predict the wave energy.
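The abstract reports efficiency ratios of maximum wave potential energy to maximum landslide kinetic energy between 0.028 and 0.138. A minimal sketch of how such a ratio is computed from a measured surface profile and slider state follows; all numbers (profile shape, slider mass and speed) are hypothetical, not the thesis data:

```python
import numpy as np

RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_potential_energy(x, eta):
    """PE per unit width of a free-surface displacement eta(x):
    0.5 * rho * g * integral(eta^2 dx), trapezoidal rule."""
    e2 = eta ** 2
    return 0.5 * RHO * G * float(np.sum(0.5 * (e2[:-1] + e2[1:]) * np.diff(x)))

def slider_kinetic_energy(mass, speed):
    """KE of the landslide treated as a rigid body."""
    return 0.5 * mass * speed ** 2

# Hypothetical leading crest/trough profile and slider state (illustration only).
x = np.linspace(0.0, 4.0, 2001)                                            # m
eta = 0.01 * np.sin(2 * np.pi * x / 2.0) * np.exp(-((x - 2.0) / 1.0) ** 2)  # m
pe = wave_potential_energy(x, eta)
ke = slider_kinetic_energy(mass=25.0, speed=1.2)  # kg, m/s
print(pe / ke)  # energy-transfer efficiency, dimensionless
```

The ratio is dimensionless, so the same calculation applies at laboratory or field scale once the profile and slider state are known.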
172

Privacy preservation for training datasets in database: application to decision tree learning

Fong, Pui Kuen 15 December 2008
Privacy preservation is important for machine learning and data mining, but measures designed to protect private information sometimes result in a trade-off: reduced utility of the training samples. This thesis introduces a privacy-preserving approach that can be applied to decision-tree learning without a concomitant loss of accuracy. It describes an approach to preserving the privacy of collected data samples in cases where information in the sample database has been partially lost. The approach converts the original sample datasets into a group of unreal datasets, from which an original sample cannot be reconstructed without the entire group. The approach does not perform well for sample datasets with low frequency, or when there is low variance in the distribution of all samples. However, this problem can be solved, at the cost of some extra storage, through a modified implementation of the approach introduced later in this thesis.
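One loose reading of the "unreal dataset" idea, sketched below purely for illustration (this is not the thesis's actual algorithm, and the attribute domains are invented), is that a real record is replaced by its complement within the attribute universe, so that no single stored record reveals the original:

```python
from itertools import product

# Hypothetical attribute domains (illustration only).
DOMAIN = {"outlook": ("sunny", "rain"), "windy": (True, False)}

def complement_records(record):
    """All records in the attribute universe except the original one.
    The original is recoverable only as the single universe record
    missing from the returned group."""
    universe = [dict(zip(DOMAIN, vals)) for vals in product(*DOMAIN.values())]
    return [r for r in universe if r != record]

sample = {"outlook": "sunny", "windy": True}
unreal = complement_records(sample)
# No individual unreal record equals the real sample; only the whole
# group, compared against the universe, identifies it.
```

With larger domains the complement group grows quickly, which is consistent with the abstract's remark that the modified approach costs extra storage.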
173

The mat sat on the cat : investigating structure in the evaluation of order in machine translation

McCaffery, Martin January 2017
We present a multifaceted investigation into the relevance of word order in machine translation. We introduce two tools, DTED and DERP, each using dependency structure to detect differences between the structures of machine-produced translations and human-produced references. DTED applies the principle of Tree Edit Distance to calculate edit operations required to convert one structure into another. Four variants of DTED have been produced, differing in the importance they place on words which match between the two sentences. DERP represents a more detailed procedure, making use of the dependency relations between words when evaluating the disparities between paths connecting matching nodes. In order to empirically evaluate DTED and DERP, and as a standalone contribution, we have produced WOJ-DB, a database of human judgments. Containing scores relating to translation adequacy and more specifically to word order quality, this is intended to support investigations into a wide range of translation phenomena. We report an internal evaluation of the information in WOJ-DB, then use it to evaluate variants of DTED and DERP, both to determine their relative merit and their strength relative to third-party baselines. We present our conclusions about the importance of structure to the tools and their relevance to word order specifically, then propose further related avenues of research suggested or enabled by our work.
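DTED applies the principle of tree edit distance to dependency structures. The underlying edit-distance idea, counting the insertions, deletions and substitutions needed to turn one object into another, can be illustrated on flat word sequences; this Wagner-Fischer sketch is a simplified stand-in for the tree version, not the DTED implementation itself:

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning sequence a into sequence b (Wagner-Fischer DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete everything
    for j in range(n + 1):
        dp[0][j] = j          # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete
                           dp[i][j - 1] + 1,         # insert
                           dp[i - 1][j - 1] + cost)  # substitute / match
    return dp[m][n]

hyp = "the mat sat on the cat".split()
ref = "the cat sat on the mat".split()
print(edit_distance(hyp, ref))  # -> 2 (substitute 'mat' and 'cat')
```

Lifting this to trees, as DTED does, changes the operations to node-level edits constrained by the dependency structure, but the dynamic-programming principle is the same.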
174

A Framework for Generative Product Design Powered by Deep Learning and Artificial Intelligence : Applied on Everyday Products

Nilsson, Alexander, Thönners, Martin January 2018
In this master’s thesis we explore the idea of using artificial intelligence in the product design process and seek to develop a conceptual framework for how it can be incorporated to make user customized products more accessible and affordable for everyone. We show how generative deep learning models such as Variational Autoencoders and Generative Adversarial Networks can be implemented to generate design variations of windows and clarify the general implementation process along with insights from recent research in the field. The proposed framework consists of three parts: (1) A morphological matrix connecting several identified possibilities of implementation to specific parts of the product design process. (2) A general step-by-step process on how to incorporate generative deep learning. (3) A description of common challenges, strategies and solutions related to the implementation process. Together with the framework we also provide a system for automatic gathering and cleaning of image data as well as a dataset containing 4564 images of windows in a front view perspective.
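A core mechanism in the Variational Autoencoders mentioned above is the reparameterization step, which keeps latent sampling differentiable so the encoder can be trained by backpropagation. A minimal numpy sketch of that step and the accompanying KL term follows; the latent size and values are arbitrary, and this is not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).
    In a real VAE this keeps sampling differentiable w.r.t. mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))

mu = np.zeros(8)       # hypothetical encoder outputs for one window image
log_var = np.zeros(8)
z = reparameterize(mu, log_var)    # latent code a decoder would turn into a design
print(kl_divergence(mu, log_var))  # -> 0.0 for a standard-normal posterior
```

Generating design variations then amounts to decoding perturbed or interpolated latent codes z, which is what makes such models attractive for exploring a design space.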
175

Human-informed robotic percussion renderings: acquisition, analysis, and rendering of percussion performances using stochastic models and robotics

Van Rooyen, Robert Martinez 19 December 2018
A percussion performance by a skilled musician will often extend beyond a written score in terms of expressiveness. This assertion is clearly evident when comparing a human performance with one that has been rendered by some form of automaton that expressly follows a transcription. Although music notation enforces a significant set of constraints, it is the responsibility of the performer to interpret the piece and “bring it to life” in the context of the composition, style, and perhaps with a historical perspective. In this sense, the sheet music serves as a general guideline upon which to build a credible performance that can carry with it a myriad of subtle nuances. Variations in such attributes as timing, dynamics, and timbre all contribute to the quality of the performance that will make it unique within a population of musicians. The ultimate goal of this research is to gain a greater understanding of these subtle nuances, while simultaneously developing a set of stochastic motion models that can similarly approximate minute variations in multiple dimensions on a purpose-built robot. Live or recorded motion data, and algorithmic models will drive an articulated robust multi-axis mechatronic system that can render a unique and audibly pleasing performance that is comparable to its human counterpart using the same percussion instruments. By utilizing a non-invasive and flexible design, the robot can use any type of drum along with different types of striking implements to achieve an acoustic richness that would be hard if not impossible to capture by sampling or sound synthesis. The flow of this thesis will follow the course of this research by introducing high-level topics and providing an overview of related work. 
Next, a systematic method for gesture acquisition of a set of well-defined percussion scores will be introduced, followed by an analysis that will be used to derive a set of requirements for motion control and its associated electromechanical subsystems. A detailed multidisciplinary engineering effort will be described that culminates in a robotic platform design within which the stochastic motion models can be utilized. An analysis will be performed to evaluate the characteristics of the robotic renderings when compared to human reference performances. Finally, this thesis will conclude by highlighting a set of contributions as well as topics that can be pursued in the future to advance percussion robotics.
176

Large planetary data visualization using ROAM 2.0

Persson, Anders January 2005
Estimating an adequate level of detail for an object in a specific view is one of the important problems in 3D computer graphics, and is especially important in real-time applications. The well-known continuous level-of-detail technique Real-time Optimally Adapting Meshes (ROAM) has been employed with success for almost 10 years but, due to the rapid development of graphics hardware, has now been found inadequate: compared to many other level-of-detail techniques, it cannot benefit from the higher triangle throughput available on today's graphics cards. This thesis describes an implementation of the new version of ROAM (informally known as ROAM 2.0) for the purpose of massive planetary data visualization. It shows how the problems of the old technique can be bridged to adapt to newer graphics cards while still benefiting from the advantages of ROAM. The resulting implementation presented here is specialized for spherical objects and handles both texture and geometry data of arbitrarily large sizes in an efficient way.
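Continuous level-of-detail schemes like ROAM decide how finely to subdivide each patch from its projected screen-space error. The following sketch illustrates that decision rule only; the scale factor, pixel threshold and error-halving assumption are hypothetical, not the thesis implementation:

```python
def needs_split(geometric_error, distance, fov_scale=1000.0, threshold=2.0):
    """Split a patch while its projected error exceeds a pixel
    threshold; projected error shrinks with camera distance."""
    if distance <= 0.0:
        return True
    projected_error = geometric_error * fov_scale / distance
    return projected_error > threshold

def choose_depth(base_error, distance, max_depth=20):
    """Assume each split roughly halves a patch's geometric error;
    return the subdivision depth the error test settles on."""
    depth, error = 0, base_error
    while depth < max_depth and needs_split(error, distance):
        error *= 0.5
        depth += 1
    return depth

print(choose_depth(base_error=10.0, distance=100.0))     # near: deep subdivision
print(choose_depth(base_error=10.0, distance=100000.0))  # far: shallow
```

For a planetary dataset the same rule applied per patch yields fine meshes near the viewer and coarse meshes at the horizon, which is what keeps the triangle budget bounded.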
177

Dynamic and Static Approaches for Glyph-Based Visualization of Software Metrics

Majid, Raja January 2008
This project presents research on software visualization techniques. We introduce the concepts of software visualization and software metrics, and propose two visualization techniques: Static Visualization (glyph objects with static textures) and Dynamic Visualization (glyph objects with moving elements). Our intent is to study existing techniques for visualizing software metrics and then propose a new approach that is more time-efficient and easier for the viewer to perceive. We focus on the practical aspects of visualizing multivariate datasets and compare the proposed approaches in practice. The project also provides an implementation of the proposed visualization techniques; we describe the software development life cycle of the proposed visualization system and its complete implementation.
178

Contribution to complex surfaces reconstruction from large and unorganized datasets for 3D metrology

El hayek, Nadim 18 December 2014
Complex surfaces exhibit real challenges in regard to their design specification, their manufacturing, their measurement and the evaluation of their manufacturing defects. They are classified according to their geometric/shape complexity as well as to their required tolerance, and the manufacturing and measurement processes are selected accordingly. In order to transcribe significant information from the measured data, a data processing scheme is essential. Here, processing involves surface reconstruction, with the aim of reconstituting the geometry and topology underlying the points and extracting the necessary metrological information (form and/or dimensional errors). For the category of aspherical surfaces, where a mathematical model is available, the processing of the data, which are not necessarily organized, is done by fitting/associating the aspherical model to the data. The precision sought in optics is typically nanometric. In this context, we propose the L-BFGS optimization algorithm, used for the first time in metrological applications, which allows solving unconstrained non-linear optimization problems robustly, automatically and fast. The L-BFGS method remains efficient and performs well even in the presence of very large amounts of data, up to several million points.
In the category of general freeform surfaces, and particularly turbine blades, the manufacturing, measurement and data processing are at a different, sub-micrometric scale. Freeform surfaces are generally not defined by a mathematical formula but are rather represented using parametric models such as B-Splines and NURBS. We present a detailed state-of-the-art review of existing reconstruction algorithms in this field and then propose a new iterative B-Spline association approach based on active contour deformation. The algorithm is independent of problems related to initialization and initial parameterization and therefore constitutes a novelty in this field. We establish a thorough study and a series of tests to show the advantages and current limitations of our approach on examples of closed curves in 2D, and conclude with perspectives on improving the method and generalizing it to surfaces in 3D.
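The thesis's first contribution is associating a mathematical surface model with large unorganized point sets via L-BFGS. A minimal sketch using SciPy's L-BFGS-B solver to fit a sphere, standing in for the aspherical model, to synthetic noise-free points follows; the geometry and starting guess are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic "measured" points on a sphere of radius 2 centred at (1, 0, 0).
theta = rng.uniform(0.0, np.pi, 500)
phi = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.c_[1.0 + 2.0 * np.sin(theta) * np.cos(phi),
            2.0 * np.sin(theta) * np.sin(phi),
            2.0 * np.cos(theta)]

def residuals_sq(params):
    """Association criterion: sum of squared radial residuals
    between the points and the candidate sphere."""
    cx, cy, cz, r = params
    d = np.linalg.norm(pts - np.array([cx, cy, cz]), axis=1)
    return np.sum((d - r) ** 2)

res = minimize(residuals_sq, x0=[0.0, 0.0, 0.0, 1.0], method="L-BFGS-B")
cx, cy, cz, r = res.x
print(round(r, 3))  # close to the true radius 2.0
```

A real aspherical model replaces the radial residual with the point-to-asphere distance, but the unconstrained quasi-Newton association loop is the same, which is why L-BFGS scales to millions of points.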
179

High-resolution modelling with bi-dimensional shallow water equations based codes: high-resolution topographic data use for flood hazard assessment over urban and industrial environments

Abily, Morgan 11 December 2015
High-resolution (infra-metric) topographic data, including LiDAR and photo-interpreted datasets, are becoming commonly available over large spatial extents, such as at municipality or industrial-site scale. These datasets are promising for high-resolution (HR) Digital Elevation Model (DEM) generation, allowing the inclusion of fine above-ground structures that influence overland-flow hydrodynamics in urban environments. DEMs are one key input in hydroinformatics for free-surface hydraulic modelling using standard 2D Shallow Water Equations (SWE) based numerical codes. Nonetheless, several categories of technical and numerical challenges arise from the use of this type of data with standard 2D SWE codes. The objective of this thesis is to examine the possibilities, advantages and limits of HR topographic data use within standard categories of 2D hydraulic numerical modelling tools for flood hazard assessment. The concepts of HR topographic data and 2D SWE based numerical modelling are recalled, and HR modelling is performed for (i) intense runoff and (ii) river flood scenarios using LiDAR and photo-interpreted datasets. Tests incorporating HR surface-elevation data in standard modelling tools range from an industrial-site scale to a megacity district scale (Nice, France). Several standard 2D SWE based codes, offering different means of integrating the HR data and based on varied numerical methods, are tested (Mike 21, Mike 21 FM, TELEMAC-2D, FullSWOF_2D). The added value of integrating the fine above-ground features that affect the flows is demonstrated. Tools and methods for assessing the uncertainties related to the use of these HR data are developed, and a spatial Global Sensitivity Analysis is performed. The resulting Sobol sensitivity-index maps highlight and quantify the importance of the modeller's choices in the variance of HR flood-model results, as well as the spatial variability of the impact of the uncertain parameters tested.
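The Sobol index mapped in the thesis is the variance-based first-order sensitivity S_i = Var(E[Y|X_i]) / Var(Y). A toy estimate of it by conditioning on one input at a time is sketched below on an invented linear stand-in model, not the flood codes; the input names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def first_order_index(xi, y, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning X_i into
    equal-probability bins and taking conditional means of Y."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

n = 100_000
x1 = rng.uniform(0.0, 1.0, n)  # e.g. a friction coefficient (hypothetical)
x2 = rng.uniform(0.0, 1.0, n)  # e.g. an inflow discharge (hypothetical)
y = 3.0 * x1 + 1.0 * x2        # stand-in for a flood-model output

# Analytic values for this linear model: S1 = 9/10, S2 = 1/10.
print(round(first_order_index(x1, y), 2), round(first_order_index(x2, y), 2))
```

Computing such indices per grid cell of the model output is what yields sensitivity maps like those described in the abstract, at the cost of many model evaluations.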
180

Audio noise reduction using deep neural networks

Talár, Ondřej January 2017
The thesis focuses on the use of a deep recurrent neural network with the Long Short-Term Memory (LSTM) architecture for robust denoising of audio signals. LSTM is currently very attractive because of its ability to retain information across time steps, updating its internal state not only through the training algorithm but also through the interaction of its gates with the cell state. The work describes the selection of the initial dataset and the noise used, along with the creation of optimal test data. The Keras framework for Python is selected for network training. Candidate networks for possible solutions are explored and described, followed by several experiments to determine the true behavior of the neural network.
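The gated memory that makes LSTM attractive for denoising can be seen in a single cell step. The numpy sketch below shows the standard LSTM update, not the thesis's Keras network; the layer sizes, weights and "noisy audio" feature sequence are all invented:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gates are stacked in the order: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])           # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])       # forget gate: how much old memory to keep
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:4 * H])   # candidate memory
    c = f * c_prev + i * g        # cell state carries long-term memory
    h = o * np.tanh(c)            # hidden state / output
    return h, c

D, H = 3, 4  # arbitrary input and hidden sizes
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):  # a short stand-in "noisy audio" feature sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In a denoising network, stacks of such cells (handled by Keras's LSTM layer) map noisy frames to clean ones, with the forget gate deciding how much past context to carry forward.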
