21

HEIGHT PROFILE MODELING AND CONTROL OF INKJET 3D PRINTING

Yumeng Wu (13960689) 14 October 2022 (has links)
Among all additive manufacturing processes, material jetting, or inkjet 3D printing, builds the product much like traditional inkjet printing, either by drop-on-demand or continuous printing. In addition to the advantages it shares with other additive manufacturing methods, it can achieve higher resolution, and, combined with its ability to accept a wide range of functional inks, inkjet 3D printing is predominantly used in pharmaceutical and biomedical applications. A height profile model is necessary to better estimate the geometry of a printed product. Numerical height profile models have been documented that describe the inkjet printing process from the moment a droplet hits the substrate until it is fully cured. Although they estimate height profiles relatively accurately, these models generally take a long time to compute; a simplified model that achieves sufficient accuracy while reducing computational complexity is needed for real-time process control. In this work, a layer-to-layer height propagation model that aims to balance computational complexity and model accuracy is proposed and experimentally validated. The model consists of two sub-models: one dedicated to multi-layer line printing and another that applies more broadly to multi-layer 2D patterns. Both predict the height profile of drops through separate volume and area layer-to-layer propagation, where the propagation is based on material flow and volume conservation. The models are validated on an experimental inkjet 3D printing system equipped with a heated piezoelectric dispenser head made by Microdrop. There are notable similarities between inkjet 3D printing and inkjet image printing, which has been studied extensively to improve color printing quality. Image processing techniques are needed to convert nearly continuous color intensity levels into a binary printing map while remaining acceptable to the human visual system, and it is reasonable to leverage such techniques to improve the quality of inkjet 3D printed products more effectively and efficiently. A framework is proposed to adapt image processing techniques for inkjet 3D printing. The standard error diffusion method is chosen to demonstrate the framework, and the adaptation is experimentally validated; the adapted error diffusion method improves printing quality in terms of geometric integrity with low demand on computing power. Model predictive control has been widely used for process control in various industries, and with a carefully designed cost function it can be an effective tool for improving inkjet 3D printing. While many researchers have used model predictive control to indirectly improve the functional properties of printed products, geometry control is often overlooked, possibly due to the lack of height profile models for inkjet 3D printing that are suitable for real-time control. Height profile control of inkjet 3D printing can be formulated as a constrained nonlinear model predictive control problem: the input to the printing system is always constrained, as the droplet volume is not only bounded but also cannot be adjusted continuously due to the limitations of the printhead. A specific cost function is proposed to better account for the geometry of both the final printed product and the intermediate layers.
The cost function is further adjusted for the inkjet 3D printing system to reduce memory usage for larger print geometries by introducing sparse matrices and scalar cost weights. Two patterns with different parameter settings are simulated using the model predictive controller, and the simulated results show a consistent improvement over open-loop prints. Experimental validation is also performed on both a bi-level pattern and a P pattern, the same patterns printed with the adapted error diffusion method; printing under model predictive control outperforms open-loop printing. In summary, a set of layer-to-layer height propagation profile models for inkjet 3D printing is proposed and experimentally validated, a framework that adapts error diffusion to improve inkjet 3D printing is proposed and validated experimentally, and model predictive control with a carefully designed cost function that limits memory usage is shown, both in simulation and experimentally, to improve the geometric integrity of inkjet 3D printing.
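The abstract does not spell out the adapted error diffusion algorithm, so the following is only a minimal sketch of how standard (Floyd-Steinberg) error diffusion can be carried from image halftoning to a layer height map: the desired height at each pixel is quantized to a fire/no-fire decision and the rounding error is pushed to unprocessed neighbours. The function name, the drop-height threshold and the 8x8 target layer are illustrative assumptions, not values from the thesis.

import numpy as np

def error_diffusion_drop_map(target_height, drop_height):
    """Quantize a desired layer height map into a binary jetting map,
    diffusing the rounding error to unprocessed neighbours
    (standard Floyd-Steinberg weights; illustrative only)."""
    h = target_height.astype(float).copy()
    rows, cols = h.shape
    fire = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            old = h[i, j]
            new = drop_height if old >= drop_height / 2 else 0.0
            fire[i, j] = 1 if new > 0 else 0
            err = old - new
            if j + 1 < cols:                  h[i, j + 1]     += err * 7 / 16
            if i + 1 < rows and j > 0:        h[i + 1, j - 1] += err * 3 / 16
            if i + 1 < rows:                  h[i + 1, j]     += err * 5 / 16
            if i + 1 < rows and j + 1 < cols: h[i + 1, j + 1] += err * 1 / 16
    return fire

# Example: a flat 20 um target layer rendered with 15 um drops
target = np.full((8, 8), 20.0)
print(error_diffusion_drop_map(target, drop_height=15.0))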
22

Frequency and Damping Characteristics of Generators in Power Systems

Zou, Xiaolan 25 January 2018 (has links)
Power system stability is essential for maintaining the power system's oscillation frequency within a small, acceptable interval around its nominal value. Hence, it is necessary to study and control the frequency for stable operation by understanding the characteristics of the power system. One approach is to understand the frequency and damping characteristics of generators in power systems. To this end, a simulation analysis of the IEEE 118-bus power system is used for this study. The analysis includes a theoretical, mathematical treatment and simulation studies of the swing equation to determine the characteristics of a damped single-machine infinite bus, represented as a generator connected to a large network with a small-signal disturbance caused by line losses. Additionally, a mathematical derivation of Prony analysis is presented in order to estimate the frequency and damping ratio from the simulation results. In the end, the results demonstrate that the frequency and damping characteristics of generators are highly dependent on the system inertia constant; a higher inertia constant is therefore a critical factor in making the system more stable. / Master of Science / A power system's stability depends on maintaining the oscillation frequency within a small and acceptable range around its nominal frequency. In order to control the frequency for stable operation of a power system, it is necessary to study the characteristics within the power system. One approach is to study the frequency and damping characteristics of generators in power systems. For this study, a simulation analysis of the IEEE 118-bus power system is used. This includes a mathematical approach and simulation studies of the swing equation, which determine the characteristics of a damped single-machine infinite bus, represented as a generator connected to a large network with a small-signal disturbance caused by line losses. Additionally, the mathematical basis of Prony analysis is presented in order to estimate the frequency and damping ratio from the simulation results. In the end, the results demonstrate that the frequency and damping characteristics of generators are highly dependent on the system inertia constant. Therefore, a high inertia constant is critical to the stability of the system.
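For readers unfamiliar with the swing equation mentioned above, a minimal per-unit simulation of a damped single-machine infinite bus is sketched below. The inertia, damping and power values are arbitrary illustrative numbers, not parameters from the IEEE 118-bus study.

import numpy as np

# Damped single-machine infinite-bus swing equation (per unit):
#   M * d2(delta)/dt2 + D * d(delta)/dt = Pm - Pmax * sin(delta)
# Parameter values below are illustrative, not taken from the thesis.
M, D = 6.0 / (2 * np.pi * 60), 0.15   # inertia coefficient (2H/ws) and damping
Pm, Pmax = 0.8, 1.2                   # mechanical input and max electrical power

def simulate(delta0, omega0, dt=0.001, t_end=5.0):
    delta, omega = delta0, omega0
    trace = []
    for _ in range(int(t_end / dt)):
        ddelta = omega
        domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
        delta += dt * ddelta              # forward-Euler integration
        omega += dt * domega
        trace.append(delta)
    return np.array(trace)

# Small disturbance around the equilibrium angle asin(Pm/Pmax)
eq = np.arcsin(Pm / Pmax)
rotor_angle = simulate(delta0=eq + 0.1, omega0=0.0)
print(rotor_angle[:5])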
23

Décodeurs Haute Performance et Faible Complexité pour les codes LDPC Binaires et Non-Binaires / High Performance and Low Complexity Decoders for Binary and Non-Binary LDPC Codes

Li, Erbao 19 December 2012 (has links)
This thesis is dedicated to the study of iterative decoders for binary and non-binary low-density parity-check (LDPC) codes. The objective is to design low-complexity, low-latency decoders with good performance in the error-floor region. In the first part of the thesis, we study the recently introduced finite alphabet iterative decoders (FAIDs).
Using a large number of FAIDs, we propose a decoding diversity algorithm to improve the error correction capability of binary LDPC codes with variable node degree 3 over the binary symmetric channel. The decoder diversity framework makes it possible to guarantee error correction under iterative decoding beyond the pseudo-distance of the LDPC codes. We give a detailed example of a set of FAIDs which corrects all error patterns of weight 7 or less on a short structured (N=155, K=64, Dmin=20) LDPC code, while traditional decoders (BP, min-sum) fail on 5-error patterns. Then, by viewing the FAIDs as dynamic systems, we analyze the behavior of FAID decoders on chosen problematic error patterns. Based on the observation of approximately periodic trajectories for the most harmful error patterns, we propose an algorithm which combines decoding diversity with random jumps in the state space of the iterative decoder. We show by simulation that this technique can approach the performance of maximum likelihood decoding for several codes. In the second part of the thesis, we propose a new complexity-reduced decoding algorithm for non-binary LDPC codes called Trellis-Extended Min-Sum (T-EMS). By transforming the message domain into the so-called delta domain, we are able to choose deviations from the most reliable configuration row-wise, while usual EMS-like decoders choose the deviations column-wise. Selecting the deviations row-wise enables us to reduce the decoding complexity without any performance loss compared to EMS. We also propose to add an extra column to the trellis representation of the messages, which solves the latency issue of existing decoders: the extra column allows all extrinsic messages to be computed in parallel with a proper hardware implementation. Both parallel and serial hardware architectures for T-EMS are discussed. The complexity analysis shows that T-EMS is especially suitable for high-rate non-binary LDPC codes over small and moderate field sizes.
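As background for the decoders discussed above, the sketch below implements only the baseline binary min-sum decoder that FAIDs and T-EMS improve upon; it is not the FAID or T-EMS algorithm itself, and the small parity-check matrix and channel values are made up for illustration.

import numpy as np

# Baseline min-sum decoding on a toy parity-check matrix (not the FAID or
# T-EMS algorithms of the thesis; code and channel values are made up).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def min_sum_decode(llr, H, iters=20):
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H              # variable-to-check messages
    for _ in range(iters):
        msg_cv = np.zeros_like(msg_vc, dtype=float)
        for i in range(m):
            idx = np.where(H[i])[0]
            for j in idx:
                others = [k for k in idx if k != j]
                sign = np.prod(np.sign(msg_vc[i, others]))
                msg_cv[i, j] = sign * np.min(np.abs(msg_vc[i, others]))
        total = llr + msg_cv.sum(axis=0)           # a posteriori LLRs
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):               # all parity checks satisfied
            return hard
        msg_vc = (np.tile(total, (m, 1)) - msg_cv) * H
    return hard

llr = np.array([1.2, -0.3, 2.1, 0.8, -1.5, 0.4])   # channel log-likelihood ratios
print(min_sum_decode(llr, H))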
24

Análise do jogo de futebol por sistemas dinâmicos categóricos / Soccer match analysis by categorical dynamic systems

Drezner, Rene 08 May 2014 (has links)
The aim of this study was to develop a model for analyzing soccer matches based on a dynamic description of the game. The model was built on the semantic model of Lamas (2012) and the event-description language of Seabra (2010). Starting from the proposal of Lamas (2012), a model with greater conceptual resolution, inspired by Bayer's (1986) categorization of tasks and specific to soccer, was elaborated. This higher-resolution conceptual model was then adapted to the event-description language of Seabra (2010). With the adapted model, categories of segments (dynamic classes) were created by decomposing the game into elementary segments and regrouping them into larger classes that represent the transitions between the phases of the model. The frequency of occurrence of these segments was the object of analysis. Three types of comparison between classes were created from the frequencies: main classes, penetration subclasses, and penetration subclasses that result in a shot. Applying the model to the same sample used by Seabra (2010) showed that the comparison classes are effective in discriminating the teams' styles of play. However, the final model still contains many simplifications that reduce its descriptive potential, so further refinement is needed to improve the description of game events.
26

Application de la validation de données dynamiques au suivi de performance d'un procédé / Application of dynamic data validation to process performance monitoring

Ullrich, Christophe 17 October 2010 (has links)
The quality of the measurements used to follow the evolution of chemical or petrochemical processes can significantly affect how they are operated. Unfortunately, every measurement is subject to error. Errors in the measured data can lead to significant drifts in process operation, which can have harmful effects on process safety or yield. Data validation is a very important task because it transforms the set of available data into a coherent set of values defining the state of the process. Data validation makes it possible to correct the measurements, estimate the values of unmeasured variables and compute the a posteriori uncertainties of all the variables. At industrial scale, it is regularly applied to processes operating continuously, represented by steady-state models. However, for monitoring transient phenomena, steady-state validation algorithms are no longer effective. The subject of this thesis is the application of dynamic data validation to the performance monitoring of chemical processes. The dynamic data validation algorithm developed in this thesis is based on a simultaneous solution of the optimization problem and the model equations. The differential equations are discretized by a method of weighted residuals: orthogonal collocation. The use of a moving-time-window method keeps the problem at a reasonable size. The optimization algorithm used is an interior-point Successive Quadratic Programming algorithm. The dynamic data validation algorithm developed reduces the uncertainty of the estimates. The examples studied are presented from the simplest to the most complex. The first models studied are interconnected storage tanks; this type of model consists only of mass balances. The models of the following examples, chemical reactors, consist of mass and heat balances. The last model studied is a vapour-liquid separation drum, which combines mass and heat balances with vapour-liquid equilibrium phenomena. The evaluation of the sensitivity matrix and the computation of a posteriori variances have been extended to processes represented by dynamic models, and their application is illustrated on several examples. Changing the validation-window parameters influences the redundancy within the window and therefore the a posteriori variance reduction factor. The developments proposed in this work thus provide a rational criterion for choosing the window size in dynamic data validation applications. Integrating alternative estimators into the algorithm increases its robustness, since these estimators yield unbiased estimates in the presence of gross errors in the measurements. Organization of the thesis: the thesis begins with an introductory chapter presenting the problem, the objectives of the research and the outline of the work. The first part of the thesis is devoted to the state of the art and to the theoretical development of a dynamic data validation method. It is organized as follows:
- The first chapter is devoted to steady-state data validation. It begins by showing the role played by data validation in process control. The different types of measurement errors and of redundancy are then presented, along with several methods for solving linear and nonlinear steady-state problems. The chapter ends with the description of a method for computing a posteriori variances.
- In the second chapter, two categories of dynamic data validation methods are presented: filtering methods and nonlinear programming methods. For each type of method, the main formulations found in the literature are described with their main advantages and drawbacks.
- The third chapter is devoted to the theoretical development of the dynamic data validation algorithm designed in this thesis, together with the strategic choices that were made. The chosen algorithm is based on a formulation of the optimization problem that includes a system of differential-algebraic equations. The differential equations are discretized by orthogonal collocation using Lagrange interpolation polynomials. Different representations of the input variables are discussed. To reduce the computational cost and keep the optimization problem tractable, the moving-time-window method is used. An interior-point Successive Quadratic Programming algorithm solves the discretized differential equations and the model equations simultaneously. The analytical derivatives of the objective-function gradient and of the constraint Jacobian are also presented in this chapter. Finally, a quality criterion allowing the different variants of the algorithm to be compared is proposed.
- This first part ends with the development of an original algorithm for computing a posteriori variances. The method developed in this chapter is similar to the method described in the first chapter for steady-state processes. The development is carried out for the two representations of the input variables discussed in chapter 3. To close the chapter, the method is applied analytically to a small example consisting of a single differential equation and a single link equation.
The second part of the thesis applies the dynamic data validation algorithm developed in the first part to several case studies. For each example, the influence of the algorithm parameters on robustness, ease of convergence and reduction of the uncertainty of the estimates is examined. The ability of the algorithm to reduce the uncertainty of the estimates is evaluated by means of the error reduction rate and the variance reduction factor.
- The first chapter of this second part studies one or more storage tanks with variable level, with or without fluid recycling. This first case involves only mass balances.
- Chapter 6 examines a stirred-tank reactor with heat exchange; the example therefore involves both mass and energy balances.
- The study of a flash drum in chapter 7 brings vapour-liquid equilibria into account.
- Chapter 8 is devoted to robust estimators, whose performance is compared on the examples studied in chapters 5 and 6.
The thesis ends with a chapter presenting conclusions and some future perspectives.
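As a small illustration of the steady-state building block described in the first chapter (not the dynamic orthogonal-collocation algorithm of the thesis), the sketch below reconciles three flow measurements around a single mass-balance node by weighted least squares and shows the a posteriori variance reduction. The flow values and variances are assumptions chosen for the example.

import numpy as np

# Steady-state linear data reconciliation around one mass-balance node:
#   F1 - F2 - F3 = 0.  Measurements and variances are made up.
A = np.array([[1.0, -1.0, -1.0]])          # linear balance constraint  A x = 0
y = np.array([100.0, 61.0, 42.0])          # raw measurements (note the imbalance)
V = np.diag([4.0, 1.0, 1.0])               # measurement variances

# Weighted least-squares projection onto the constraint:
#   x_hat = y - V A^T (A V A^T)^-1 A y
lam = np.linalg.solve(A @ V @ A.T, A @ y)
x_hat = y - V @ A.T @ lam
V_hat = V - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ V)   # a posteriori covariance

print("reconciled flows:", x_hat)          # now satisfy F1 = F2 + F3 exactly
print("variance reduction:", np.diag(V) - np.diag(V_hat))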
27

Dynamic System Modeling And State Estimation For Speech Signal

Ozbek, Ibrahim Yucel 01 May 2010 (has links)
This thesis presents an all-inclusive framework for improving current formant tracking and audio (and/or visual)-to-articulatory inversion algorithms. The possible improvements are summarized as follows. The first part of the thesis investigates the problem of formant frequency estimation when the number of formants to be estimated is fixed or variable, respectively. The fixed-number formant tracking method is based on the assumption that the number of formant frequencies is fixed along the speech utterance. The proposed algorithm combines a dynamic programming algorithm with Kalman filtering/smoothing. In this method, the speech signal is divided into voiced and unvoiced segments, and the formant candidates are associated via the dynamic programming algorithm for each voiced and unvoiced part separately. Individual adaptive Kalman filtering/smoothing is then used to estimate the formant frequencies. The performance of the proposed algorithm is compared with algorithms from the literature. The variable-number formant tracking method considers only those formant frequencies which are visible in the spectrogram; the number of formant frequencies is therefore not fixed and can change along the speech waveform. In that case, it is also necessary to estimate the number of formants to track, for which the proposed algorithm uses extra logic (a formant track start/end decision unit). The measurement update of each individual formant trajectory is handled via Kalman filters. The performance of the proposed algorithm is illustrated by examples. The second part of this thesis is concerned with improving audiovisual-to-articulatory inversion performance. The related studies can be examined in two parts: Gaussian mixture model (GMM) regression-based inversion and Jump Markov Linear System (JMLS)-based inversion. GMM regression-based inversion involves modeling audio (and/or visual) and articulatory data as a joint Gaussian mixture model; the conditional expectation of this distribution gives the desired articulatory estimate. In this method, we examine the usefulness of combining various acoustic features and the effectiveness of various fusion techniques applied to audiovisual features, and we propose dynamic smoothing methods to smooth the articulatory trajectories. The performance of the proposed algorithm is illustrated and compared with conventional algorithms. JMLS inversion ties the acoustic (and/or visual) spaces and the articulatory space via multiple state-space representations. In this way, the articulatory inversion problem is converted into a state estimation problem in which the audiovisual data are the measurements and the articulatory positions are the state variables. The proposed inversion method first learns the parameter set of the state-space model via an expectation-maximization (EM) based algorithm, and state estimation is handled via an interacting multiple model (IMM) filter/smoother.
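To make the Kalman filtering/smoothing step above concrete, here is a minimal scalar Kalman filter smoothing one noisy formant-frequency track under a random-walk state model. It omits the dynamic-programming candidate association and the adaptive noise tuning of the thesis, and the noise levels and starting values are assumptions.

import numpy as np

# Minimal scalar Kalman filter for one formant-frequency track
# (random-walk state model; noise levels are assumed, not from the thesis).
def kalman_track(measurements, q=50.0**2, r=150.0**2, x0=500.0, p0=1e6):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                      # predict (random-walk process noise)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the formant candidate z (Hz)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy candidates around a slowly rising first formant
true_f1 = np.linspace(500, 700, 50)
noisy = true_f1 + np.random.default_rng(0).normal(0, 150, size=50)
print(kalman_track(noisy)[-5:].round(1))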
28

Spatial analysis of sea level rise associated with climate change

Chang, Biao 20 September 2013 (has links)
Sea level rise (SLR) is one of the most damaging impacts associated with climate change. The objective of this study is to develop a comprehensive framework to identify the spatial patterns of sea level in the historical records, project regional mean sea levels in the future, and assess the corresponding impacts on coastal communities. The first part of the study suggests a spatial pattern recognition methodology to characterize the spatial variations of sea level and to investigate the sea level footprints of climatic signals. A technique based on an artificial neural network is proposed to reconstruct average sea levels for the characteristic regions identified. In the second part of the study, a spatial dynamic system model (DSM) is developed to simulate and project the changes in regional sea levels and sea surface temperatures (SSTs) under different development scenarios of the world. The highest sea levels are predicted under scenario A1FI, ranging from 71 cm to 86 cm (relative to the 1990 global mean sea level); the lowest predicted sea levels are under scenario B1, ranging from 51 cm to 64 cm (relative to the 1990 global mean sea level). Predicted sea levels and SSTs of the Indian Ocean are significantly lower than those of the Pacific and Atlantic Oceans under all six scenarios. The last part of this dissertation assesses the inundation impacts of projected regional SLR on three representative coastal U.S. states through a geographic information system (GIS) analysis. Critical issues in the inundation impact assessment process are identified and discussed.
29

Avaliação numérica e computacional do efeito de incertezas inerentes a sistemas mecânicos / Numerical and computational evaluation of the effect of uncertainties inherent the mechanical systems

Costa, Tatiane Nunes da 25 August 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Modern engineering problems are most often nonlinear and may also be subject to uncertainties that directly influence the response of a given system. In this sense, stochastic methods have been studied extensively in order to obtain the best configuration for a given design. Among the stochastic techniques, the Monte Carlo method stands out, and in particular Latin Hypercube Sampling (LHS), a simplified version of it. For this type of modeling, the Stochastic Finite Element Method (SFEM) is increasingly used, and an important tool for the discretization of stochastic fields is the Karhunèn-Loève (KL) expansion. In this work, three case studies are used: a discrete two-degree-of-freedom system, a continuous system consisting of a beam coupled to both linear and nonlinear springs, and a rotor composed of a shaft, bearings and disks. The influence of uncertainties on the systems studied is then assessed using LHS, SFEM and the KL expansion. This stochastic study is employed in the construction of the robust optimal design for the rotor problem presented.
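As a brief illustration of the Latin Hypercube Sampling mentioned above (a generic sketch, not the implementation used in the dissertation), the code below draws stratified samples on the unit hypercube and rescales them to two hypothetical rotor parameters; the parameter names and ranges are assumptions.

import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Plain Latin Hypercube Sampling on [0, 1]^n_vars: each variable's range
    is split into n_samples equal strata and each stratum is hit exactly once."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(n_samples, n_vars))          # jitter inside each stratum
    strata = np.arange(n_samples)
    perm = np.column_stack([rng.permutation(strata) for _ in range(n_vars)])
    return (perm + u) / n_samples

# Example: 10 samples of 2 uncertain parameters, rescaled to physical ranges
samples = latin_hypercube(10, 2, rng=42)
stiffness = 1e5 + samples[:, 0] * 2e4      # illustrative ranges, not from the thesis
damping   = 50.0 + samples[:, 1] * 10.0
print(np.column_stack([stiffness, damping]).round(1))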
30

Contextual behavioural modelling and classification of vessels in a maritime piracy situation

Dabrowski, Joel Janek January 2014 (has links)
In this study, a method is developed for modelling and classifying the behaviour of maritime vessels in a piracy situation. Prior knowledge is used to construct a probabilistic graphical model of maritime vessel behaviour. This model is a novel variant of a dynamic Bayesian network (DBN) that extends the switching linear dynamic system (SLDS) to accommodate contextual information. A generative model and a classifier model are developed. The purpose of the generative model is to generate simulated data by modelling the behaviour of fishing vessels, transport vessels and pirate vessels in a maritime piracy situation. The vessels move, interact and perform various activities on a predefined map. A novel methodology for evaluating and optimising the generative model is proposed; this methodology can easily be adapted to other applications. The model is evaluated by comparing simulation results with 2011 pirate attack reports. The classifier model classifies maritime vessels into predefined categories according to their behaviour, by inferring the class of a vessel as fishing, transport or pirate. The classification method is evaluated by classifying the data generated by the generative model and comparing the result to the true classes of the simulated vessels. / Thesis (PhD) -- University of Pretoria, 2014. Electrical, Electronic and Computer Engineering.
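To illustrate the switching linear dynamic system that the thesis extends, the sketch below simulates a toy two-mode SLDS vessel track in which a discrete behaviour mode modulates a constant-velocity motion model. The contextual DBN extension of the thesis is not reproduced, and the mode names, switching probability and noise levels are assumptions.

import numpy as np

# Toy switching linear dynamic system (SLDS) for a vessel track: a discrete
# behaviour mode switches between "transit" and "loiter" and modulates a 2-D
# constant-velocity motion model.  All numbers are illustrative assumptions.
rng = np.random.default_rng(1)
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
modes = {"transit": 0.95, "loiter": 0.3}        # velocity retention per mode
P_switch = 0.05                                 # probability of changing mode

x = np.array([0.0, 0.0, 5.0, 2.0])              # [east, north, v_east, v_north]
mode = "transit"
track = []
for t in range(100):
    if rng.uniform() < P_switch:
        mode = "loiter" if mode == "transit" else "transit"
    x = F @ x                                   # constant-velocity propagation
    x[2:] = modes[mode] * x[2:] + rng.normal(0, 0.5, size=2)   # mode-dependent dynamics
    track.append((mode, x[:2].copy()))
print(track[0][0], track[0][1], "...", track[-1][0], track[-1][1])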
