121

Novel efficiency evaluation methods and analysis for three-phase induction machines

McKinnon, Douglas John, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2005 (has links)
This thesis describes new methods of evaluating the efficiency of three-phase induction machines using synthetic loading. Synthetic loading causes the induction machine to draw full-load current without the need to connect a mechanical load to the machine's drive shaft. The synthetic loading methods cause the machine to periodically accelerate and decelerate, producing an alternating motor-generator action. This action causes the machine, on average over each synthetic loading cycle, to operate at rated rms current, rated rms voltage and full-load speed, thereby producing rated copper losses, iron loss and friction and windage loss. The excitation voltages are supplied from a PWM inverter with a large capacity DC bus capable of supplying rated rms voltage. The synthetic loading methods of efficiency evaluation are verified in terms of the individual losses in the machine by using a new dynamic model that accounts for iron loss and all parameter variations. The losses are compared with the steady-state loss distribution determined using very accurate induction machine parameters. The parameters were identified using a run-up-to-speed test at rated voltage and the locked rotor and synchronous speed tests conducted with a variable voltage supply. The latter tests were used to synthesise the variations in stator leakage reactance, magnetising reactance and the equivalent iron loss resistance over the induction machine's speed range. The run-up-to-speed test was used to determine the rotor resistance and leakage reactance variations over the same speed range. The test method results showed for the first time that the rotor leakage reactance varied in the same manner as the stator leakage and magnetising reactances with respect to current. When all parameter variations are taken into account there is good agreement between theoretical and measured results for the synthetic loading methods. The synthetic loading methods are applied to three-phase induction machines with both single- and double-cage rotors to assess the effect of rotor parameter variations in the method. Various excitation waveforms for each method were used and the measured and modelled efficiencies compared to conventional efficiency test results. The results verify that it is possible to accurately evaluate the efficiency of three-phase induction machines using synthetic loading.
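For orientation only (a standard identity, not a result quoted from the thesis), efficiency evaluation by loss segregation combines the loss components listed above as:

```latex
\eta \;=\; \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}
     \;=\; \frac{P_{\mathrm{in}} - \big(P_{\mathrm{Cu,stator}} + P_{\mathrm{Cu,rotor}}
           + P_{\mathrm{Fe}} + P_{\mathrm{f\&w}} + P_{\mathrm{stray}}\big)}{P_{\mathrm{in}}}
```

Synthetic loading aims to reproduce each of these components at its full-load value without coupling a mechanical load to the shaft.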
122

Self-healing RF SoCs: low cost built-in test and control driven simultaneous tuning of multiple performance metrics

Natarajan, Vishwanath 13 October 2010 (has links)
The advent of deep-submicron technology, coupled with ever-increasing customer demands for more functionality in a compact silicon real estate, has led to a proliferation of highly complex integrated RF system-on-chip (SoC) and system-on-insulator (SoI) solutions. The use of scaled CMOS technologies for high-frequency wireless applications poses daunting technological challenges both in design and in manufacturing test. To ensure market success, manufacturers need to ensure the quality of these advanced RF devices by subjecting them to a conventional set of production test routines that are both time-consuming and expensive. Typically, the devices are tested for parametric specifications such as gain, linearity metrics, quadrature mismatches, phase noise and noise figure (NF), and for end-to-end system-level specifications such as EVM (error vector magnitude), BER (bit error rate), etc. Due to the reduced visibility imposed by high levels of integration, testing for parametric specifications is becoming more and more complex. To offset the yield loss resulting from process variability effects and reliability issues in RF circuits, the use of self-healing/self-tuning mechanisms will be imperative. Such self-healing is typically implemented as a test/self-test and self-tune procedure and is applied post-manufacture. To enable this, simple test routines that can accurately diagnose complex performance parameters of the RF circuits need to be developed first. After diagnosing the performance of a complex RF system, appropriate compensation techniques need to be developed to increase or restore the system performance. Moreover, the test, diagnosis and compensation approach should be low-cost, with minimal hardware and software overhead, to ensure that the final product is economically viable for the manufacturer. The main components of the thesis are as follows: 1) Low-cost specification testing of advanced radio frequency front-ends: methodologies are developed to address the test cost and test time associated with conventional production testing of advanced RF front-ends, and are amenable to performing self-healing of RF SoCs. Test generation algorithms are developed to perform alternate test stimulus generation that accounts for the artifacts of the test signal path, such as response-capture accuracy, load-board DfT, etc. A novel cross loop-back methodology is developed for low-cost system-level specification testing of multi-band RF transceivers. A novel low-cost EVM testing approach is developed for production testing of wireless 802.11 OFDM front-ends. A signal-transformation-based model extraction technique is developed to compute multiple RF system-level specifications of wireless front-ends from a single data capture. The developed techniques are low-cost and reduce the overall contribution of test cost to the manufacturing cost of advanced wireless products. 2) Analog tuning methodologies for compensating wireless RF front-ends: methodologies for low-cost self-tuning of multiple impairments of wireless RF devices are developed. This research considers, for the first time, multiple analog tuning parameters of a complete RF transceiver system (transmitter and receiver) for tuning purposes. The developed techniques are demonstrated on hardware components and behavioral models to improve the overall yield of integrated RF SoCs.
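As background for the EVM testing mentioned above (this is the standard textbook definition, not a formula taken from the thesis), the rms error vector magnitude compares received symbols $r_k$ with their ideal constellation points $s_k$:

```latex
\mathrm{EVM}_{\mathrm{rms}} \;=\;
\sqrt{\frac{\tfrac{1}{N}\sum_{k=1}^{N}\lvert r_k - s_k\rvert^{2}}
           {\tfrac{1}{N}\sum_{k=1}^{N}\lvert s_k\rvert^{2}}}
```

Low-cost EVM testing amounts to estimating this quantity without the full vector-signal-analyzer measurement chain used in conventional production test.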
123

On Ill-Posedness and Local Ill-Posedness of Operator Equations in Hilbert Spaces

Hofmann, B. 30 October 1998 (has links) (PDF)
In this paper, we study ill-posedness concepts for nonlinear and linear inverse problems in a Hilbert space setting. We define local ill-posedness of a nonlinear operator equation $F(x) = y_0$ at a solution point $x_0$ and study the interplay between the nonlinear problem and its linearization using the Fréchet derivative $F'(x_0)$. To find an appropriate ill-posedness concept for the linearized equation, we define intrinsic ill-posedness for linear operator equations $Ax = y$ and compare this approach with the ill-posedness definitions due to Hadamard and Nashed.
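One common way to state local ill-posedness (a standard formulation from the regularization literature, which may differ in detail from the definition used in the paper): the equation $F(x) = y_0$ is locally ill-posed at the solution $x_0$ if, for every radius $r > 0$, convergence of the images does not force convergence of the preimages,

```latex
\forall\, r>0 \;\; \exists\, \{x_n\}\subset B_r(x_0):\qquad
\lVert F(x_n)-F(x_0)\rVert \;\longrightarrow\; 0
\quad\text{while}\quad
\lVert x_n-x_0\rVert \;\not\longrightarrow\; 0 .
```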
124

Globale Abschätzung akustischer Wandadmittanzen in Innenräumen mittels inverser Verfahren / Global estimation of acoustic wall admittances in rooms by means of inverse methods

Anderssohn, Robert 12 March 2014 (has links) (PDF)
Reflection and absorption of sound waves at room boundaries play a decisive role in the optimization of the acoustic properties of enclosed spaces; they are governed above all by the geometry and the dynamic behaviour of the wall structures. Within numerical acoustics these boundary properties are quantified by the so-called admittance boundary conditions of the acoustic boundary value problem. Especially at low frequencies, the quality of acoustic simulations depends strongly on knowledge of the boundary admittances. The present work comprises the development and investigation of inverse algorithms, based on deterministic discretizations of the acoustic boundary value problem, for the global determination of frequency-dependent boundary admittance parameters. The global approach allows oblique (non-perpendicular) sound incidence to be taken into account. The method requires an experiment in which the sound field is sampled with microphones, all sound sources present are characterized, and the geometry of the room is captured. From these data the developed algorithms compute a global admittance distribution over the entire boundary. Successfully identified admittance characteristics, used as admittance boundary conditions, are intended to enable and improve low-frequency simulations of wave propagation in rooms with complex geometry and arbitrary surface properties. Determining boundary admittances from partially measured sound pressure data is classified mathematically as an inverse problem. For the inverse algorithms, the Boundary Element Method (BEM) and the Finite Element Method (FEM) are used to discretize the acoustic boundary value problem. The inverse formulation of the boundary element equations yields an ill-conditioned but linear system of equations, whereas the finite element formulation leads to a nonlinear optimization problem that, owing to the complexity of the inverse problem, is usually ill-conditioned as well. An essential result of this work is the comparison of the linear and nonlinear algorithms for the inverse problem with respect to their derivation, the implemented solution techniques and their markedly different solution quality. Studies of admittance reconstruction on two- and three-dimensional theoretical models illustrate the influence of model accuracy, measurement effort and measurement noise on the results of the inverse algorithms. Finally, the inverse method of global admittance determination is put to a practical test using measurement data from an experiment conducted at Brüel & Kjaer.
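As an illustration of how such an ill-conditioned linear boundary-element system is typically handled (a generic sketch with hypothetical names; the thesis may use a different regularization strategy), a Tikhonov-regularized least-squares solve looks like this:

```python
import numpy as np

def tikhonov_solve(A, p, alpha):
    """Regularized least-squares solve of an ill-conditioned linear system A y = p.

    A     : (m, n) discretized boundary-element operator (hypothetical)
    p     : (m,) measured sound-pressure data
    alpha : regularization weight balancing data fit against solution norm
    """
    n = A.shape[1]
    # Normal equations of  min ||A y - p||^2 + alpha ||y||^2
    return np.linalg.solve(A.conj().T @ A + alpha * np.eye(n), A.conj().T @ p)
```

The weight alpha trades the data fit against the norm of the reconstructed admittance distribution and is usually chosen according to the noise level in the microphone data.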
125

Investigação sobre procedimentos de identificação de cargas axiais em dutos submersos a partir de respostas vibratórias / Investigation of a procedure for the identification of axial loads applied to a submerged beam by using vibration response

Kitatani Júnior, Sigeo 31 July 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In the present thesis, an inverse procedure for the indirect determination of axial loads applied to submerged pipe-like structures from their dynamic responses is proposed and evaluated both numerically and experimentally. The investigation is motivated by practical problems encountered in the oil industry. An experimental bench was designed and built, consisting of a reservoir inside which a tubular stainless-steel beam was mounted and tested. Special fixtures were designed so that controlled axial loads could be applied and different types of boundary conditions represented. In parallel, computational routines were developed, based on the finite element approach, for the two-dimensional modelling of the structure, accounting for the effects of axial loads, flexible supports and fluid-structure interaction. Bearing in mind the difficulties expected when the methodology is applied in real conditions, special dynamic test procedures were considered, including Operational Modal Analysis (OMA), which enables modal parameters to be identified from output-only measurements. Numerous scenarios were considered using either numerically simulated or experimentally measured responses. For the resolution of the inverse problem, two strategies were investigated: the first consists of the deterministic resolution of a constrained optimization problem using evolutionary algorithms; the second, which accounts for the presence of uncertainties in the experimental data, is a stochastic approach based on Bayesian inference combined with Markov chain sampling and the Metropolis-Hastings algorithm. The results obtained confirm the operational feasibility and satisfactory accuracy of the suggested identification approaches. / Doctorate in Mechanical Engineering
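To make the stochastic identification strategy concrete, here is a minimal random-walk Metropolis-Hastings sampler of the kind referred to above (a generic sketch; `log_posterior`, the proposal step and all names are hypothetical, not the thesis code):

```python
import numpy as np

def metropolis_hastings(log_posterior, x0, n_samples=10000, step=0.05):
    """Random-walk Metropolis-Hastings sampler (generic sketch).

    log_posterior : callable returning log p(load | measured response), up to a constant
    x0            : initial guess of the axial load (scalar or array)
    step          : standard deviation of the Gaussian proposal
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_posterior(x)
    chain = []
    for _ in range(n_samples):
        proposal = x + step * np.random.randn(*x.shape)
        logp_prop = log_posterior(proposal)
        # accept with probability min(1, posterior ratio)
        if np.log(np.random.rand()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        chain.append(x.copy())
    return np.array(chain)
```

In this setting, `log_posterior` would combine a prior on the axial load with the misfit between measured and model-predicted modal parameters.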
126

Identification de la conductivité hydraulique pour un problème d'intrusion saline : Comparaison entre l'approche déterministe et l'approche stochastique / Identification of hydraulic conductivity for a seawater intrusion problem : Comparison between the deterministic approach and the stochastic approach

Mourad, Aya 12 December 2017 (has links)
This thesis is concerned with the identification, from observations or field measurements, of the hydraulic conductivity K for the saltwater intrusion problem in a nonhomogeneous, isotropic, unconfined aquifer. The PDE model involved is a coupled system of nonlinear parabolic equations, completed by boundary and initial conditions as well as compatibility conditions on the data. The main unknowns are the depth h of the saltwater/freshwater interface and the elevation h₁ of the upper surface of the aquifer. The inverse problem is formulated as an optimization problem whose cost function is a least-squares functional measuring the discrepancy between the observed interface depths and those provided by the model. Considering the exact problem as a constraint for the optimization problem and introducing the Lagrangian associated with the cost function, we prove that the optimality system has at least one solution. The main difficulties are to find the right set of admissible parameters and to prove the differentiability of the operator mapping the hydraulic conductivity K to the state variables (h, h₁). This is the first result of the thesis. The second result concerns the numerical implementation of the optimization problem. In practice, only pointwise observations (in space and in time) are available, corresponding to the monitoring wells, so the previous results are adapted to the case of discrete observation data. The cost function is approximated by a quadrature formula that accounts for the discrete observations and is then minimized using the bound-constrained limited-memory variable-metric algorithm (BLMVM). The exact problem and the adjoint problem are discretized in space by a P₁-Lagrange finite element method combined with a semi-implicit time scheme; an analysis of this scheme shows it to be of order one in time and space. Numerical results are presented to illustrate the ability of the method to determine the unknown parameters. In the third part of the thesis, the hydraulic conductivity is treated as a stochastic parameter. To perform a rigorous numerical study of stochastic effects on the saltwater intrusion problem, a Wiener polynomial chaos expansion is used, and the stochastic problem is reformulated as a set of deterministic problems, one for each stochastic coefficient of the expansion.
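Schematically (the notation below is assumed for illustration, not quoted from the thesis), with monitoring wells at points $x_i$ and an observation window $[0,T]$, the least-squares cost function described above has the form

```latex
J(K) \;=\; \frac{1}{2}\sum_{i}\int_{0}^{T}
\Big[\big(h(x_i,t;K)-h^{\mathrm{obs}}(x_i,t)\big)^{2}
   + \big(h_1(x_i,t;K)-h_1^{\mathrm{obs}}(x_i,t)\big)^{2}\Big]\,\mathrm{d}t ,
```

to be minimized over the set of admissible conductivities K, with the state equations acting as constraints.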
127

Identification paramétrique en boucle fermée par une commande optimale basée sur l’analyse d’observabilité / Closed loop parameter identification based on the design of optimal control and the observability analysis

Qian, Jun 14 September 2015 (has links)
For online parameter identification, the methods developed in this thesis make it possible to design, online and in closed loop, optimal inputs that enrich the information contained in the ongoing experiment. These methods rely on real-time measurements of the process, on a chosen dynamic nonlinear (or linear) multivariable model of the process, on a sensitivity model of the measurements with respect to the parameters to be estimated, and on a nonlinear observer. Observability analysis and predictive control techniques are used to define the optimal input, which is computed online by constrained optimization. Stabilization aspects are also studied (by adding fictitious constraints or by a Lyapunov technique). Finally, an explicit control law is derived for the particular case of a first-order linear system. Illustrative examples are treated with the ODOE4OPE software: a bioreactor, a continuous stirred-tank reactor and a delta wing. These examples show that the parameters can be estimated with good accuracy, and at low experimental cost, in a single experiment.
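For context, optimal input design for parameter identification is usually formalized through the Fisher information built from the output sensitivities (generic notation assumed here; the thesis couples this idea with an observability analysis and predictive control rather than the bare criterion below):

```latex
S_k \;=\; \frac{\partial y(t_k;\theta,u)}{\partial \theta},
\qquad
F(\theta,u) \;=\; \sum_{k} S_k^{\top} R^{-1} S_k,
\qquad
u^{\star} \;=\; \arg\max_{u\in\mathcal U}\;\Phi\big(F(\theta,u)\big),
```

where $R$ is the measurement-noise covariance and $\Phi$ is a scalar criterion such as the determinant (D-optimality); in a closed-loop scheme, $\theta$ is replaced at each step by the observer's current estimate.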
128

First principles and black box modelling of biological systems

Grosfils, Aline 13 September 2007 (has links)
Living cells and their components play a key role within the biotechnology industry. Cell cultures and their products of interest are used in vaccine design as well as in the agri-food field. In order to ensure optimal operation of such bioprocesses, understanding the complex mechanisms which govern them is fundamental. Mathematical models may be helpful to grasp the biological phenomena at work in a bioprocess. Moreover, they allow prediction of system behaviour and are frequently used within engineering tools to ensure, for instance, product quality and reproducibility.

Mathematical models of cell cultures may come in various shapes and be phrased with varying degrees of mathematical formalism. Typically, three main model classes are available to describe the nonlinear dynamic behaviour of such biological systems. They consist of macroscopic models which only describe the main phenomena appearing in a culture; indeed, high model complexity may lead to numerical computation times incompatible with engineering tools like software sensors or controllers. The first model class is composed of first-principles or white-box models. They consist of the system of mass balances for the main species (biomass, substrates, and products of interest) involved in a reaction scheme, i.e. a set of irreversible reactions which represent the main biological phenomena occurring in the considered culture. Whereas transport phenomena inside and outside the cell culture are often well known, the reaction scheme and associated kinetics are usually a priori unknown and require special care in their modelling and identification. The second kind of commonly used model belongs to black-box modelling. Black boxes consider the system to be modelled only in terms of its input and output characteristics; they consist of combinations of mathematical functions which do not allow any physical interpretation, and they are usually used when no a priori information about the system is available. Finally, hybrid or grey-box modelling combines the principles of white- and black-box models. Typically, a hybrid model uses the available prior knowledge while the reaction scheme and/or the kinetics are replaced by a black box, for instance an artificial neural network.

Among these numerous models, which one should be used to obtain the best possible representation of a bioprocess? We attempt to answer this question in the first part of this work. On the basis of two simulated bioprocesses and a real experimental one, two model kinds are analysed: first-principles models, whose reaction scheme and kinetics can be determined thanks to systematic procedures, are compared with hybrid model structures in which neural networks are used to describe the kinetics or the whole reaction term (i.e. kinetics and reaction scheme). The most common artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network, are tested. Pure black-box modelling is not considered in this work; indeed, numerous papers already compare different neural networks with hybrid models, and the results of these previous studies converge to the same conclusion: hybrid models, which combine the available prior knowledge with the nonlinear mapping capabilities of neural networks, provide better results.

From this model comparison, and from the fact that a physical kinetic model structure may itself be viewed as a combination of basis functions much like a neural network, kinetic model structures allowing biological interpretation should be preferred. This is why the second part of this work is dedicated to improving the general kinetic model structure used in the previous study. In spite of its good performance (largely due to the associated systematic identification procedure), this kinetic model, which represents activation and/or inhibition effects by every culture component, suffers from a limitation: it does not explicitly address saturation by a culture component, modelling this kind of behaviour instead as an inhibition compensating a strong activation. Generalizing this kinetic model is a challenging task, since the physical interpretation has to be improved while a systematic identification procedure has to be maintained.

The last part of this work is devoted to another kind of biological system: proteins. Such macromolecules, which are essential parts of all living organisms and consist of combinations of only 20 different building blocks called amino acids, are widely used in industry. In order to allow their functioning under non-physiological conditions, manufacturers are prepared to modify the amino acid sequence of a protein. However, substituting one amino acid for another changes the thermodynamic stability, which may lead to the loss of the protein's biological functionality. Among the theoretical methods predicting stability changes caused by mutations, the PoPMuSiC (Prediction Of Proteins Mutations Stability Changes) program has been developed within the Genomic and Structural Bioinformatics Group of the Université Libre de Bruxelles. This software predicts, in silico, changes in the thermodynamic stability of a given protein under all possible single-site mutations, either over the whole sequence or in a region specified by the user. PoPMuSiC nevertheless has limitations and should be improved using recently developed techniques of protein stability evaluation, such as the statistical mean-force potentials of Dehouck et al. (2006). Our work proposes to enhance the performance of PoPMuSiC by combining the new energy functions of Dehouck et al. (2006) with the well-known artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network. This time, we attempt to obtain models that remain physically interpretable through an appropriate use of the neural networks. / Doctorate in Applied Sciences
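To illustrate the hybrid (grey-box) structure described above, here is a minimal sketch in which the mass-balance part is kept as prior knowledge and the kinetics are replaced by a small neural network (all names, dimensions and the reactor configuration are assumed for illustration, not taken from the thesis):

```python
import numpy as np

def mlp_rates(x, W1, b1, W2, b2):
    """Small MultiLayer Perceptron standing in for the unknown reaction kinetics."""
    hidden = np.tanh(W1 @ x + b1)
    return W2 @ hidden + b2          # one rate per reaction in the scheme

def hybrid_model_rhs(x, K, nn_params, D, x_in):
    """Mass balances (first principles) with black-box kinetics (hybrid model).

    x         : concentrations of biomass, substrates and products
    K         : stoichiometric (pseudo-)yield matrix of the reaction scheme
    nn_params : (W1, b1, W2, b2) weights of the kinetic network
    D, x_in   : dilution rate and feed concentrations of a hypothetical reactor
    """
    phi = mlp_rates(x, *nn_params)   # reaction rates estimated by the network
    return K @ phi + D * (x_in - x)  # dx/dt = K*phi(x) + transport terms
```

In a pure first-principles model, `mlp_rates` would instead be a parametric kinetic law (e.g. Monod-type terms); the hybrid model keeps the stoichiometric structure `K @ phi` and lets the network absorb what is not known a priori.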
129

Vers l'assimilation de données estimées par radar Haute Fréquence en mer macrotidale / Towards data assimilation with High Frequency Radar currents in macrotidal sea

Jousset, Solène 01 July 2016 (has links)
The Iroise Sea has been observed since 2006 by high-frequency (HF) radars, which estimate surface currents. These measurements have the temporal and spatial resolution needed to capture the fine dynamics of the coastal domain. This thesis aims at designing and applying a method for assimilating these data into a realistic numerical model, in order to optimize the bottom friction and to correct the model state so as to better represent the residual tidal circulation and the positions of the Ushant fronts in the Iroise Sea. The data assimilation method used is the Ensemble Kalman Filter, whose originality lies in the use of stochastic modelling to estimate the model error. First, ensemble simulations were carried out by perturbing various model parameters regarded as sources of error: the meteorological forcing, the bottom roughness, the horizontal turbulence closure and the surface roughness. These ensembles were explored in terms of ensemble spread and correlation. An Ensemble Kalman Smoother was then used to optimize the bottom roughness parameter (z0) from the surface current data and from a model ensemble generated with a perturbed, spatially varying z0. The method was first tested in a twin experiment and then with real observations. The maps of the optimized parameter z0 obtained with real observations were then used in the model over another period, and the results were compared with independent observations of the area. Finally, twin experiments were set up to correct the model state. Two approaches were compared: one performs the analysis on the low-frequency signal only, by filtering the tide out of both the data and the model; the other takes the whole signal into account. With these experiments, we assess the filter's ability to control both the observed part of the state vector (surface currents) and the unobserved part of the system (sea surface temperature).
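For reference, the analysis step of a stochastic Ensemble Kalman Filter of the kind used here can be sketched as follows (a generic implementation with assumed variable names; the smoother actually used in the thesis differs in detail, for instance by acting on parameters over a time window):

```python
import numpy as np

def enkf_analysis(X, Y, obs, R):
    """Stochastic Ensemble Kalman Filter analysis step (generic sketch).

    X   : (n_state, n_ens) ensemble of model states (or friction parameters z0)
    Y   : (n_obs, n_ens)   ensemble of predicted HF-radar surface currents
    obs : (n_obs,)         observed surface currents
    R   : (n_obs, n_obs)   observation-error covariance
    """
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)      # predicted-observation anomalies
    Pxy = Xa @ Ya.T / (n_ens - 1)
    Pyy = Ya @ Ya.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread
    obs_pert = obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(obs)), R, size=n_ens).T
    return X + K @ (obs_pert - Y)
```

The ensemble covariances Pxy and Pyy are exactly what the perturbation experiments described above (spread and correlation of the ensembles) are meant to characterize.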
130

Méthodes de diagnostic pour les moteurs de fusée à ergols liquides / Model-based fault diagnosis for rocket engines

Iannetti, Alessandra 30 September 2016 (has links)
The main objective of this work is to demonstrate and analyze the potential benefits of advanced real-time algorithms for rocket engine monitoring and diagnosis. In the last two decades, many research efforts in Europe have been devoted to the development of specific diagnostic techniques such as neural networks, vibration analysis and parameter identification, but few results are available concerning algorithm comparison and diagnostic performance analysis. Another major objective of this work was the improvement of the monitoring system of the Mascotte test bench (ONERA/CNES), a cryogenic test facility located at ONERA Palaiseau and used to analyze cryogenic combustion and nozzle expansion behaviour representative of real rocket engine operation. The first step of the work was the selection of a critical system of the bench, the water cooling circuit, followed by an analysis of possible model-based diagnostic techniques such as parameter identification and Kalman filtering. Three new algorithms were developed; after a preliminary validation on real test data, they were thoroughly analyzed via a functional benchmark with representative failure cases. The last part of the work consisted of integrating the diagnostic algorithms into the bench computing environment in order to prepare for a future real-time application. A simple closed-loop architecture based on the new diagnostic tools was also studied in order to assess the potential of the new methods for future use in the context of intelligent bench control strategies.
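As one concrete instance of the parameter-identification approach mentioned above (a generic recursive least-squares sketch with hypothetical variable names; the algorithms actually developed for Mascotte are not reproduced here), a fault indicator can be built from the prediction residual of an online parameter estimator:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One step of recursive least-squares parameter identification (generic sketch).

    theta : current parameter estimate (e.g. hydraulic-resistance coefficients)
    P     : parameter covariance matrix
    phi   : regressor vector built from bench measurements (flow, pressure, ...)
    y     : measured output at this sample
    lam   : forgetting factor, < 1 to track slow drifts that may indicate a fault
    """
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + float(phi.T @ P @ phi))      # gain
    eps = y - float(phi.T @ theta.reshape(-1, 1))     # prediction residual
    theta = theta + k.flatten() * eps
    P = (P - k @ phi.T @ P) / lam
    return theta, P, eps
```

A persistent growth of the residual `eps`, or a drift of `theta` away from its nominal value, can then be thresholded to raise a diagnostic flag.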
