391 |
Ajuste do modelo matemático de uma aeronave com sistema de aumento de estabilidade com base em ensaios em túnel de vento / Adjustment of an aircraft mathematical model with stability augmentation system based on wind tunnel analysis. Mattos, Wellington da Silva, 03 August 2007
The present work describes the application of a model-updating method, based on experimental wind tunnel data, to an aircraft with a longitudinal stability augmentation system (LSAS). The study includes a review of model-updating methods, the development of the aircraft mathematical model and a description of previously conducted wind tunnel tests of the aircraft with the LSAS. The LSAS comprises (1) a data acquisition system, which processes the sensor signal and sends the control command to the actuator; (2) a potentiometer, used as a pitch-angle sensor; and (3) a servo motor, used to actuate canard deflection. The aircraft model is based on the Grumman X-29, which has a canard and a forward-swept wing. Its static stability margin can be adjusted by changing the position of the center of rotation, which is made to coincide with the aircraft center of gravity through weight balancing. The updating of the airplane mathematical model is carried out in the Matlab/Simulink environment by adjusting model parameters for the aircraft stability derivatives, the digital filter, and the sensor and servo dynamics. The objective is to obtain an optimal correlation between numerical and experimental results. The parametric sensitivity analysis method is chosen for model updating. In a first phase of the study, the comparison between theoretical and experimental results is based on the frequencies and damping ratios of the pitch-angle response to an impulse canard-deflection input. In a second phase, the comparison is based directly on the experimental and numerical pitch-angle time responses to the same impulse input. Three center-of-gravity positions are analyzed, one for which the aircraft is statically stable and two for which it is unstable.
Results show large variations among the adjusted parameters, indicating the need for improvements in the implementation of the adopted methodology.
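The parametric sensitivity updating described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the author's Matlab/Simulink implementation: it adjusts the stiffness and damping of a single-degree-of-freedom surrogate so that its natural frequency and damping ratio match "measured" modal data, using finite-difference sensitivities and Newton steps.

```python
import numpy as np

def modal_params(k, c, m=1.0):
    # Natural frequency and damping ratio of m*x'' + c*x' + k*x = 0
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(k * m))
    return np.array([wn, zeta])

def update(theta, target, steps=20, h=1e-6):
    # Newton-type updating driven by finite-difference sensitivities
    theta = np.array(theta, dtype=float)
    for _ in range(steps):
        r = modal_params(*theta) - target          # residual vs. measured data
        J = np.empty((2, 2))
        for j in range(2):                         # sensitivity matrix columns
            tp = theta.copy()
            tp[j] += h
            J[:, j] = (modal_params(*tp) - modal_params(*theta)) / h
        theta -= np.linalg.solve(J, r)             # parameter correction
    return theta

# Hypothetical "measured" modal data: wn = 3 rad/s, zeta = 0.1
target = np.array([3.0, 0.1])
k, c = update([4.0, 1.0], target)
```

The iteration converges to the parameters (k = 9, c = 0.6) that reproduce the target modal data exactly; the thesis applies the same idea to stability derivatives and filter/servo parameters.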
|
392 |
Local Sensitivity Analysis of Nonlinear Models - Applied to Aircraft Vehicle Systems / Lokal känslighetsanalys av icke-linjära modeller - tillämpat på grundflygplansystem. Jung, Ylva, January 2009
As modeling and simulation become a more important part of the development process, the demand for a known accuracy of simulation results has grown. Sensitivity analysis (SA) is the study of how the variation in the output of a model can be apportioned to different sources of variation. By performing SA on a system, it can be determined which inputs influence a certain output the most. The sensitivity measures examined in this thesis are the Effective Influence Matrix (EIM) and the Main Sensitivity Index (MSI).

To examine the sensitivity measures, two tests have been made: one on laboratory equipment including a hydraulic servo, and one on the conceptual landing gear model of the Gripen aircraft. The purpose of the landing gear experiment is to examine the influence of different frictions on the unfolding of the landing gear during emergency unfolding. It is also a way to test the sensitivity analysis method on an industrial example and to evaluate the EIM and MSI methods.

The EIM and MSI have the advantage that no test data are necessary, which means the robustness of a model can be examined early in the modeling process. They are also implementable in the different stages of the modeling and simulation process, so documentation can be produced at all stages. To be able to draw correct conclusions, it is essential that the information entered into the analysis is well chosen, so some knowledge is required of the model developer in order to define reasonable values.

Model developers and users ask that the method and the model quality measure be easy to understand and easy to use, and that the results be easy to interpret. The time spent on executing the analysis also has to be well spent, both in preparing the analysis and in analyzing the results. The sensitivity analysis examined in this thesis displays a good compromise between usefulness and computational cost. It demands neither programming knowledge nor any deeper understanding of statistics, making it available to model creators, model users and simulation-result users alike.
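The abstract does not define how the MSI is computed; as a hedged illustration, one common local formulation is a normalized one-at-a-time derivative measure. The toy "unfolding-time" model below is invented for the example — it is not the Gripen landing gear model.

```python
import numpy as np

def unfolding_time(mu_joint, mu_seal, preload):
    # Toy stand-in for a landing-gear unfolding-time model (hypothetical)
    return 2.0 + 5.0 * mu_joint + 1.5 * mu_seal ** 2 - 0.3 * preload

def main_sensitivity_index(f, x0, h=1e-6):
    # One-at-a-time normalized local sensitivities: S_i ~ (x_i / f) * df/dx_i,
    # rescaled so the magnitudes sum to one (an assumed MSI-like convention).
    x0 = np.asarray(x0, dtype=float)
    f0 = f(*x0)
    s = np.empty(x0.size)
    for i in range(x0.size):
        xp = x0.copy()
        xp[i] += h
        s[i] = (x0[i] / f0) * (f(*xp) - f0) / h
    return s / np.abs(s).sum()

s = main_sensitivity_index(unfolding_time, [0.2, 0.1, 1.0])
```

No test data enter the computation — only the model and a nominal operating point — which is the property the abstract highlights: the ranking of influential inputs is available early in the modeling process.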
|
393 |
Sensitivity, Noise and Detection of Enzyme Inhibition in Progress Curves. Gutiérrez Arenas, Omar, January 2006
Starting with the development of an enzymatic assay, in which an enzyme in solution hydrolysed a solid-phase-bound peptide, a model for the kinetics of enzyme action was introduced. This model allowed the estimation of kinetic parameters and enzyme activity for a system that has the peculiarity of being saturable not with the substrate but with the enzyme. In a derivation of the model, it was found that the sensitivity of the signal to variations in the enzyme concentration showed a transient increase along the reaction progress, with a maximum at high substrate-conversion levels.

The same behaviour was derived for the sensitivity in classical homogeneous enzymatic assays, and experimental evidence of this was obtained. The impact of the transient increase of the sensitivity on the error structure, and on the ability of homogeneous end-point enzymatic assays to detect competitive inhibition, then came into focus. First, a non-monotonic shape in the standard deviation of progress-curve data was found and attributed to random dispersion in the enzyme concentration operating through the transient increase in the sensitivity. Second, a model for the detection limit of the quantity Ki/[I] (the IDL-factor) as a function of the substrate conversion level was developed for homogeneous end-point enzymatic assays.

It was found that the substrate conversion level at which the IDL-factor reached an optimum was beyond the initial-velocity range. Moreover, at this optimal point not only the ability to detect inhibitors but also the robustness of the assays was maximized. These results may prove relevant in drug discovery for optimising end-point homogeneous enzymatic assays used to find inhibitors against a target enzyme in compound libraries, which are usually large (>10000) and crowded with irrelevant compounds.
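The transient rise in sensitivity along a progress curve can be reproduced with a toy pseudo-first-order model (an assumption for illustration — the thesis treats solid-phase and classical Michaelis-Menten kinetics): the sensitivity of the product signal to the enzyme concentration peaks well into the run, at about 63% substrate conversion in this simplified case.

```python
import numpy as np

def product(t, e0, k=0.5, s0=1.0):
    # Pseudo-first-order progress curve: P(t) = S0 * (1 - exp(-k*E0*t))
    return s0 * (1.0 - np.exp(-k * e0 * t))

t = np.linspace(0.0, 20.0, 2001)
e0, h = 2.0, 1e-6
# Finite-difference sensitivity dP/dE0 along the whole progress curve
sens = (product(t, e0 + h) - product(t, e0)) / h

i_max = int(np.argmax(sens))
conversion_at_peak = product(t[i_max], e0)   # fraction of S0 converted at the peak
```

For this model the peak falls at t = 1/(k*E0), where conversion is 1 - 1/e ≈ 63% — far beyond the initial-velocity range, mirroring the abstract's finding that end-point assays gain detection power at high conversion.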
|
394 |
Stochastic finite elements for elastodynamics: random field and shape uncertainty modelling using direct and modal perturbation-based approaches. Van den Nieuwenhof, Benoit, 07 May 2003
The handling of variability effects in structural models is a natural and necessary extension of deterministic analysis techniques. In the context of finite element and uncertainty modelling, the stochastic finite element method (SFEM), grouping the perturbation SFEM, the spectral SFEM and Monte-Carlo simulation, has received by far the most attention.

The present work focuses on second-moment approaches, in which the first two statistical moments of the structural response are estimated. Because of its efficiency for problems involving low variability levels, the perturbation method is selected for characterising the propagation of parameter variability from an uncertain dynamic model to its structural response. A dynamic model excited by a time-harmonic loading is postulated and the extension of the perturbation SFEM to the frequency domain is provided. This method complements the deterministic analysis with a sensitivity analysis of the system response with respect to a finite set of random parameters, and a response surface in terms of a Taylor series expansion truncated to first or second order is built. Taking into account the second-moment statistical data of the random design properties, the response sensitivities are appropriately condensed in order to obtain an estimate of the response mean value and covariance structure.
In order to handle a wide definition of variability, a computational tool is made available that can deal with material variability sources (material random variables and fields) as well as shape uncertainty sources. The latter case requires an appropriate shape parameterisation and a shape design sensitivity analysis. The computational requirements of the tool are studied and optimised by reducing the random dimension of the problem and by improving the performance of the underlying deterministic analyses. In this context, modal approaches, which are known to provide efficient alternatives to direct approaches in frequency-domain analyses, are developed. An efficient hybrid procedure, coupling the perturbation and the Monte-Carlo simulation SFEM, is proposed and analysed.

Finally, the developed methods are validated, mainly against the Monte-Carlo simulation technique, on different numerical applications: a cantilever beam structure, a plate bending problem (involving a three-dimensional model), an articulated truss structure and a plate with a random flatness defect. The propagation of the model uncertainty into the response FRFs and the effects involved by random field modelling are examined. Some remarks are stated pertaining to the influence of the parameter PDF in simulation-based methods.
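The first-order perturbation (second-moment) idea can be illustrated in miniature — assuming a single-degree-of-freedom frequency-domain model, not the full SFEM of the thesis: the response mean is taken at the parameter means, and the variance is propagated through the response sensitivities.

```python
import numpy as np

def receptance(k, c, m=1.0, w=2.0):
    # FRF magnitude of a SDOF oscillator at excitation frequency w
    return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

def perturbation_moments(f, mu, cov, h=1e-6):
    # First-order second-moment estimate: mean ~ f(mu), var ~ g^T Sigma g,
    # with g the vector of response sensitivities at the mean point.
    mu = np.asarray(mu, dtype=float)
    g = np.empty(mu.size)
    for i in range(mu.size):
        mp = mu.copy()
        mp[i] += h
        g[i] = (f(*mp) - f(*mu)) / h
    return f(*mu), g @ cov @ g

mu = [9.0, 0.6]                       # assumed mean stiffness and damping
cov = np.diag([0.09, 0.0004])         # assumed (uncorrelated) parameter variances
mean_frf, var_frf = perturbation_moments(receptance, mu, cov)
```

A Monte-Carlo run over the same parameter distribution would serve as the validation step the abstract describes; for low variability levels the two agree closely at a fraction of the cost.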
|
395 |
Efficient Simulation, Accurate Sensitivity Analysis and Reliable Parameter Estimation for Delay Differential Equations. ZivariPiran, Hossein, 03 March 2010
Delay differential equations (DDEs) are a class of differential equations that have received considerable recent attention and have been shown to model many real-life problems, traditionally formulated as systems of ordinary differential equations (ODEs), more naturally and more accurately. Ideally, a DDE modeling package should provide facilities for approximating the solution, performing a sensitivity analysis and estimating unknown parameters. In this thesis we propose new techniques for efficient simulation, accurate sensitivity analysis and reliable parameter estimation of DDEs.

We propose a new framework for designing a DDE solver which works with any supplied initial value problem (IVP) solver that is based on a general linear method (GLM) and can provide dense output. This is done by treating a general DDE as a special case of a discontinuous IVP. We identify a precise process for the numerical techniques used when solving the implicit equations that arise on a time step, such as when the underlying IVP solver is implicit or the delay vanishes.

We introduce an equation governing the dynamics of sensitivities for the most general system of parametric DDEs. Then, taking the same view as in the simulation (DDEs as discontinuous ODEs), we introduce a formula for the size of the jumps that appear at discontinuity points when the sensitivity equations are integrated. This leads to an algorithm that can compute sensitivities for various kinds of parameters very accurately.

We also develop an algorithm for reliable parameter identification of DDEs. We propose a method for adding extra constraints to the optimization problem, turning a possibly non-smooth optimization into a smooth problem. These constraints are effectively handled using information from the simulator and the sensitivity analyzer.

Finally, we discuss the structure of our evolving modeling package DDEM. We present a process that has been used for incorporating existing codes to reduce the implementation time. We discuss the object-oriented paradigm as a way of obtaining a manageable design with reusable and customizable components. The package is programmed in C++ and provides user-friendly calling sequences. The numerical results are very encouraging and show the effectiveness of the techniques.
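The "DDE as a discontinuous IVP" viewpoint can be sketched on the simplest possible example. The toy method-of-steps Euler integrator below is an illustration only — not the GLM-based DDEM solver — paired with a finite-difference sensitivity of the terminal value with respect to a parameter.

```python
def solve_dde(a, tau=1.0, t_end=2.0, dt=1e-3):
    # Method of steps for y'(t) = -a * y(t - tau), y(t) = 1 for t <= 0,
    # using forward Euler with the delayed value read from the stored grid.
    n = int(round(t_end / dt))
    lag = int(round(tau / dt))
    y = [1.0] * (n + 1)
    for i in range(n):
        y_delayed = 1.0 if i < lag else y[i - lag]   # history for t - tau <= 0
        y[i + 1] = y[i] + dt * (-a * y_delayed)
    return y

a = 0.5
y = solve_dde(a)

# Finite-difference sensitivity of y(t_end) with respect to the parameter a;
# DDEM instead integrates dedicated sensitivity equations with jump formulas.
h = 1e-6
dy_da = (solve_dde(a + h)[-1] - y[-1]) / h
```

On [0, tau] the exact solution is y = 1 - a*t, and on [tau, 2*tau] it is y = 1 - a*t + a^2*(t - tau)^2/2, so y(2) = 1 - 2a + a^2/2 and dy(2)/da = -2 + a; the numerical values track these to Euler accuracy.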
|
398 |
Sensitivity Analysis and Distortion Decomposition of Mildly Nonlinear Circuits. Zhu, Guoji, January 2007
The Volterra series (VS) is often used in the analysis of mildly nonlinear circuits. In this approach, nonlinear circuit analysis is converted into the analysis of a series of linear circuits. The main benefit is that linear circuit analysis is well established, and direct frequency-domain analysis of a nonlinear circuit becomes possible.

Sensitivity analysis is useful for comparing the quality of two designs and for evaluating gradient, Jacobian or Hessian matrices in analog computer-aided design. This thesis presents, for the first time, the sensitivity analysis of mildly nonlinear circuits in the frequency domain as an extension of the VS approach. To overcome the efficiency limitation due to multiple mixing effects, the Nonlinear Transfer Matrix (NTM) is introduced. It is the first explicit analytical representation of the complicated multiple mixing effects. Applying the NTM to sensitivity analysis yields speedups of up to two orders of magnitude.

Per-element distortion decomposition determines the contribution of each individual nonlinearity to the total distortion. It is useful in design optimization, symbolic simplification and nonlinear model reduction. In this thesis, a numerical distortion decomposition technique is introduced which combines the insight of traditional symbolic analysis with the numerical advantages of SPICE-like simulators. The use of the NTM leads to an efficient implementation. The proposed method greatly extends the size of circuit and the complexity of transistor model that can be handled compared with previous approaches. For example, an industry-standard compact model, BSIM3V3 [35], was used for the first time in distortion analysis. The decomposition can be carried out at the device, transistor and block levels, all with device-level accuracy.

The theories have been implemented in a computer program and validated on examples. The proposed methods will raise the performance of present VS-based distortion analysis to the next level.
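The low-order picture that the Volterra series formalizes can be seen on a memoryless cubic nonlinearity — a hypothetical branch equation, not the NTM machinery itself: each polynomial term mixes a single input tone into predictable harmonics, which is exactly the distortion content a per-element decomposition apportions.

```python
import numpy as np

def harmonic_amplitudes(a=0.1, g=(1.0, 0.2, 0.05), n=1024):
    # Memoryless mildly nonlinear branch i = g1*v + g2*v^2 + g3*v^3 driven by
    # v = A*cos(w*t); an FFT over one exact period picks out each harmonic.
    t = np.arange(n) / n
    v = a * np.cos(2.0 * np.pi * t)
    i = g[0] * v + g[1] * v**2 + g[2] * v**3
    spec = 2.0 * np.abs(np.fft.rfft(i)) / n
    return spec[1], spec[2], spec[3]   # fundamental, 2nd and 3rd harmonics

h1, h2, h3 = harmonic_amplitudes()
```

The closed-form small-signal results, HD2 amplitude g2*A^2/2 and HD3 amplitude g3*A^3/4 (with the fundamental compressed to g1*A + (3/4)*g3*A^3), match the FFT output exactly for this polynomial example.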
|
399 |
Steady-State Analyses: Variance Estimation in Simulations and Dynamic Pricing in Service Systems. Aktaran-Kalayci, Tuba, 04 August 2006
In this dissertation, we consider analytic and numeric approaches to the solution of probabilistic steady-state problems with specific applications in simulation and queueing theory.
Our first objective on steady-state simulations is to develop new estimators for the variance parameter of a selected output process that have better performance than certain existing variance estimators in the literature. To complete our analysis of these new variance estimators, called linear combinations of overlapping variance estimators, we do the following: establish theoretical asymptotic properties of the new estimators; test the theoretical results on a battery of examples to see how the new estimators perform in practice; and use the estimators for confidence interval estimation for both the mean and the variance parameter. Our theoretical and empirical results indicate the new estimators' potential for improvements in accuracy and computational efficiency.
Our second objective on steady-state simulations is to derive the expected values of various competing estimators for the variance parameter. In this research, we do the following: formulate the machinery to calculate the exact expected value of a given estimator for the variance parameter; calculate the exact expected values of various variance estimators in the literature; compute these expected values for certain stochastic processes with complicated covariance functions; and derive expressions for the mean squared error of the estimators studied herein. We find that certain standardized time series estimators outperform their competitors as the sample size becomes large.
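As a hedged sketch of the overlapping-batch-means building block behind the estimators discussed above (the dissertation studies linear combinations of such overlapping estimators; this is only the basic OBM form), the variance parameter of a stationary output process can be estimated as follows:

```python
import numpy as np

def obm_variance(x, m):
    # Overlapping batch means estimator of the variance parameter
    # sigma^2 = lim_n n * Var(mean of x_1..x_n) for a stationary sequence.
    n = len(x)
    # All n - m + 1 overlapping batch means of batch size m
    means = np.convolve(x, np.ones(m) / m, mode="valid")
    grand = x.mean()
    return n * m / ((n - m + 1) * (n - m)) * np.sum((means - grand) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)   # i.i.d. case: the variance parameter is 1
est = obm_variance(x, m=100)
```

For correlated simulation output (the case of interest in steady-state analysis), the same estimator applied to batch means captures the autocovariance mass that the naive sample variance misses.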
Our research on queueing theory focuses on pricing of the service provided to individual customers in a queueing system. We find sensitivity results that enable efficient computational procedures for dynamic pricing decisions for maximizing the long-run average reward in a queueing facility with the following properties: there are a fixed number of servers, each with the same constant service rate; the system has a fixed finite capacity; the price charged to a customer entering the system depends on the number of customers in the system; and the customer arrival rate depends on the current price of the service. We show that the sensitivity results considered significantly reduce the computational requirements for finding the optimal pricing policies.
|
400 |
Sensitivity Analysis in Air Quality Models for Particulate Matter. Napelenok, Sergey L., 31 October 2006
Fine particulate matter (PM2.5) has been associated with a variety of problems, including adverse health effects, reduction in visibility, damage to buildings and crops, and possible interactions with climate. Although stringent air quality regulations are in place, policy makers need efficient tools to test a wide range of control strategies. Sensitivity analysis predicts how the interdependent concentrations of the various PM2.5 components, as well as gaseous pollutant species, will respond to specific combinations of precursor emission reductions. The Community Multiscale Air Quality model (CMAQ) was outfitted with the Decoupled Direct Method in 3D for calculating sensitivities of particulate matter (DDM-3D/PM). This method was evaluated and applied to high-PM2.5 episodes in the southeastern United States. Sensitivities of directly emitted particles, as well as of particles formed in the atmosphere through chemical and physical processing of emitted gaseous precursors such as SO2, NOx, VOCs, and NH3, were calculated. DDM-3D/PM was further extended to calculate receptor-oriented sensitivities, or the Area of Influence (AOI). AOI analysis determines the geographical extent of the relative contributions of air pollutant precursors to pollutant levels at a specific receptor of interest. This method was applied to Atlanta and other major cities in Georgia. The tools developed here (DDM-3D/PM and AOI) provide valuable information to those charged with air quality management.
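The core idea of the decoupled direct method — integrating sensitivity equations alongside the concentrations — can be sketched on a one-box toy model. This is an invented example; CMAQ's DDM-3D/PM solves the full three-dimensional chemistry-transport equations.

```python
import math

def ddm_box_model(e, k=0.2, t_end=10.0, dt=1e-3):
    # Toy box model dC/dt = E - k*C, with the DDM sensitivity equation
    # dS/dt = (dF/dC)*S + dF/dE = -k*S + 1 integrated alongside it,
    # where S = dC/dE is the sensitivity to the emission rate E.
    c, s = 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        c += dt * (e - k * c)
        s += dt * (1.0 - k * s)
    return c, s

c, s_ddm = ddm_box_model(e=5.0)

# Cross-check against a brute-force finite difference in the emission rate
h = 1e-6
s_fd = (ddm_box_model(5.0 + h)[0] - c) / h
```

The direct sensitivity matches the brute-force perturbation (and the analytic value (1 - e^(-k*t))/k) while requiring only one model run per parameter rather than a perturbed rerun per emission scenario — the efficiency argument for DDM in air quality modeling.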
|