231
Sensitivity analysis and polynomial chaos expansion for parameter estimation: application to transfer in porous media. Fajraoui, Noura, 21 January 2014
The management of contaminant transfer in porous media is a growing concern and is of particular interest for pollution control in underground environments and for the management of groundwater resources, or more generally the protection of the environment. The flow and transport of pollutants are modeled by physical laws that take the form of differential-algebraic equations depending on a large number of input parameters. Most of these parameters are poorly known, are often not directly measurable, and their measurement may be tainted by uncertainty. This work is concerned with the impact of parameter uncertainty on model predictions; uncertainty and sensitivity analysis is therefore an important step in numerical simulation, as is inverse modeling. The work studies global sensitivity analysis and parameter estimation for flow and transport problems in porous media. To carry out this work, polynomial chaos expansion is used to quantify the influence of the parameters on the predictions of the numerical model. This tool not only yields the Sobol sensitivity indices but also provides a surrogate model (or metamodel) that is much faster to run. This feature is then exploited for model inversion when observations are available. For the inverse problem, we favor the Bayesian approach, which offers a rigorous framework for parameter estimation. In a second step, we developed an effective strategy for constructing sparse polynomial chaos expansions, in which only the coefficients whose contribution to the model variance is significant are retained. This strategy produced very encouraging results for two reactive transport problems. The last part of this work is devoted to the inverse problem when the model inputs are spatially distributed Gaussian stochastic fields. The peculiarity of such a problem is that it is ill-posed, because a stochastic field is defined by an infinite number of coefficients. The Karhunen-Loève decomposition reduces the dimension of the problem and also regularizes it. However, inversion with this method gives results that are sensitive to the a priori choice of the covariance function of the field. A dimension-reduction algorithm based on a selection criterion (the Schwarz criterion) is proposed to make the problem less sensitive to this choice.
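A minimal sketch of the core idea in the first part of this thesis: fit a polynomial chaos expansion by least squares and read the Sobol indices directly off the squared coefficients. The two-input toy model, Legendre basis, and degree are assumptions for illustration; the thesis applies this to flow and transport simulators.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy two-input model standing in for the flow/transport simulator
# (assumption: any deterministic function of uniform inputs on [-1, 1]).
def model(x1, x2):
    return x1 + 0.5 * x1**2 * x2 + np.sin(x2)

rng = np.random.default_rng(0)
n, p = 500, 4                                  # sample size, total degree
X = rng.uniform(-1.0, 1.0, (n, 2))
y = model(X[:, 0], X[:, 1])

def psi(k, x):
    """Legendre polynomial of degree k, orthonormal for U(-1, 1)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(x, c) * np.sqrt(2 * k + 1)

# Tensorized basis with total degree <= p, fitted by least squares
index_set = [(i, j) for i in range(p + 1) for j in range(p + 1) if i + j <= p]
A = np.column_stack([psi(i, X[:, 0]) * psi(j, X[:, 1]) for i, j in index_set])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Sobol indices read directly off the squared PCE coefficients
var = sum(c**2 for (i, j), c in zip(index_set, coef) if (i, j) != (0, 0))
S1 = sum(c**2 for (i, j), c in zip(index_set, coef) if i > 0 and j == 0) / var
S2 = sum(c**2 for (i, j), c in zip(index_set, coef) if j > 0 and i == 0) / var
print(f"S1 = {S1:.3f}, S2 = {S2:.3f} (the rest is interaction)")
```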
232
Cost-Benefit Analysis of Fish Processing in Ghana. Arthur, Elizabeth Raheema, January 2010
The main objective of this diploma thesis is to ascertain the profitability of fish processing (canning), using the MYROC Food Processing Company as a case study over the period 2005 to 2008. The analysis draws on the company's financial statements. The thesis consists of two parts. The first, theoretical part describes the fishing industry in Ghana and the benefits of fish processing, the concept of cost-benefit analysis, sensitivity analysis, and the concept of financial analysis. In the second, practical part, cost-benefit analysis, sensitivity analysis, and selected financial ratios are applied, together with a bankruptcy model used to predict financial distress. The conclusion offers recommendations for improving the financial situation of the company.
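The sensitivity analysis mentioned here typically amounts to recomputing a profitability measure, such as net present value, while varying one assumption at a time. A sketch with hypothetical figures (none of the numbers below come from the MYROC case study):

```python
# Hypothetical figures, not from the MYROC case study: one-at-a-time
# sensitivity of project NPV to the discount rate and to annual net benefits.
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

base = [-100_000] + [30_000] * 5              # outlay, then 5 years of benefit
for rate in (0.08, 0.10, 0.12):
    print(f"rate {rate:.0%}: NPV = {npv(rate, base):,.0f}")
for shock in (0.9, 1.0, 1.1):                 # +/-10% on the yearly benefits
    flows = [base[0]] + [cf * shock for cf in base[1:]]
    print(f"benefits x{shock:.1f}: NPV = {npv(0.10, flows):,.0f}")
```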
233
A simulator model for software process development environments using sensitivity analysis. Dertzbacher, Juliano, January 2011
Building software involves a high degree of risk and demands extensive planning from the manager to meet budget estimates and deadlines. In the context of software processes, few technological resources allow knowledge to be extracted from the processes modeled in PSEEs and indicate which factors cause the greatest impact on the final result, providing new opportunities to improve management. To overcome these deficiencies, simulation can be used to obtain information about the activities of the process, and sensitivity analysis can identify the variables that most significantly influence the results. This work therefore proposes a simulator model, integrated with a tool supporting process-centered project management, that uses data from the PSEE base, offers resources to handle the process data deterministically or stochastically (simulation), allows various scenarios to be tested, and enables analysis of which variables most significantly impact the final result (sensitivity analysis) before the execution of activities begins. The simulator model was developed on the basis of a systematic review of papers on simulation published in recent years and a comparative evaluation of the technological resources offered by the tools identified in the papers selected in the review. The results obtained with the implementation of the proposed model, using information from a real case study modeled in WebAPSEE, indicate improvements in the cost and development time of the process under study, as well as the identification of the most sensitive variable, allowing the execution of these activities to be optimized.
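A sketch of the simulate-then-rank idea the abstract describes: draw activity durations from assumed distributions and use a simple correlation measure as a stand-in for the sensitivity analysis. The activities and triangular distributions below are hypothetical, not taken from the WebAPSEE case study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Stochastic durations (days) for three hypothetical process activities;
# the thesis tool would read such data from the PSEE base instead.
design  = rng.triangular(3, 5, 10, n)
coding  = rng.triangular(5, 8, 20, n)
testing = rng.triangular(2, 4, 12, n)
total = design + coding + testing

# Crude sensitivity measure: correlation of each activity with the total
for name, x in [("design", design), ("coding", coding), ("testing", testing)]:
    print(f"{name:8s} corr = {np.corrcoef(x, total)[0, 1]:.2f}")
print(f"total: mean = {total.mean():.1f} d, 90th pct = {np.percentile(total, 90):.1f} d")
```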
234
STUDY OF PARTICLE SWARM FOR OPTIMAL POWER FLOW IN IEEE BENCHMARK SYSTEMS INCLUDING WIND POWER GENERATORS. Abuella, Mohamed A., 01 December 2012
Abstract of the thesis of Mohamed A. Abuella for the Master of Science degree in Electrical and Computer Engineering, presented on May 10, 2012, at Southern Illinois University Carbondale. Major professor: Dr. C. Hatziadoniu. The aim of this thesis is the optimal economic dispatch of real power in systems that include wind power. The economic dispatch of wind-powered units is quite different from that of conventional thermal units; in addition, the intermittent nature of wind speed and the operating constraints must be taken into account. This thesis therefore uses a model that captures these considerations, including whether or not the utility owns the wind turbines. The optimal power flow (OPF) is solved using one of the modern optimization algorithms, the particle swarm optimization (PSO) algorithm. The IEEE 30-bus test system is adapted to study the implementation of the PSO algorithm for OPF with conventional thermal generators, and a small, simple 6-bus system is used to study OPF of a system that includes wind-powered generators alongside thermal generators. The results of the investigations are presented in tables and figures, leading to clear conclusions.
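A minimal sketch of the PSO mechanics applied to a dispatch-style objective. The two-generator quadratic cost, limits, demand, and penalty weight are assumed for illustration; the thesis solves the full OPF on the IEEE 30-bus and 6-bus systems.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical two-generator economic dispatch: minimize a quadratic fuel
# cost while meeting a 250 MW demand (penalty method), 10-100 MW unit limits.
a = np.array([0.010, 0.015])           # assumed quadratic cost coefficients
b = np.array([20.0, 17.0])             # assumed linear cost coefficients

def cost(P):
    fuel = np.sum(a * P**2 + b * P, axis=-1)
    return fuel + 1e3 * np.abs(np.sum(P, axis=-1) - 250.0)   # balance penalty

n_part, n_iter = 30, 200
pos = rng.uniform(10, 100, (n_part, 2))                      # particle positions
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 10, 100)                        # enforce unit limits
    val = cost(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("dispatch [MW]:", gbest.round(1), "cost [$/h]:", float(cost(gbest)))
```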
235
Sensitivity analysis of estimates to sampling and positional error, and its applications. Silva, Victor Miguel, January 2015
Since a mineral deposit has its exact geometry and properties known only after its complete extraction and processing, models and estimates must be used throughout the life of the project for proper planning. Estimates are strongly affected by data quality, so its control and certification are imperative. This leads the mineral industry to adopt controls and procedures to measure and ensure data quality. The controls used for sampling and laboratory analysis are based on maximum error tolerances typically suggested in the literature or regarded as good practice, although these limits do not take into account the precision and accuracy required in a given project or the specific characteristics of a mineral deposit. In this context, this dissertation proposes, through sensitivity analysis, a methodology that measures how analytical and/or locational errors propagate to the estimates. Starting from an original dataset considered error-free, new databases are created from it by adding errors; sensitivity curves then relate the impact on the block estimates to the uncertainty added to each database. The sample errors are drawn from a normal distribution with zero mean and different standard deviations (emulating different precision levels) for both grades and spatial position. The impact is measured by comparing estimates based on the error-free database against models derived from the error-added datasets: counting blocks affected by loss/dilution, correlating the original block model with the models built from perturbed data, and computing the mean block deviations. The proposed methodology is illustrated using the Walker Lake dataset (Nevada, USA) and part of the Miraí bauxite deposit (MG, Brazil). Based on the resulting sensitivity curve, the impact of data uncertainty on the daily grade reconciliation at the Miraí mine is assessed. The results indicate that the impact of coordinate errors on grade estimation behaves exponentially, with relative errors below 10% of the sampling spacing causing little deviation in the estimated values. Kriging's weighted averaging generally reduces the analytical uncertainty. Defining as acceptable for the Miraí mine that 90% of the blocks should have a data-induced uncertainty of at most 10% for Al2O3 and mass recovery and 30% for reactive silica, the proposed methodology yields quality-control tolerances for field duplicates of 10.9%, 9.5%, and 12.5%, respectively. These limits coincide with the 10% typically suggested in the literature.
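A sketch of the perturbation procedure under stated assumptions: synthetic samples, inverse-distance weighting as a stand-in for the kriging used in the dissertation, and Gaussian relative errors added to the grades.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic deposit: 200 samples with a smooth grade trend (assumption;
# the dissertation uses the Walker Lake and Mirai data sets).
xy = rng.uniform(0, 1000, (200, 2))
grade = 40 + 5 * np.sin(xy[:, 0] / 150) + rng.normal(0, 1, 200)

def idw(xy_s, z, xy_t, power=2.0):
    """Inverse-distance estimates at xy_t (stand-in for kriging)."""
    d = np.linalg.norm(xy_s[None, :, :] - xy_t[:, None, :], axis=2) + 1e-9
    w = 1.0 / d**power
    return (w * z).sum(axis=1) / w.sum(axis=1)

gx, gy = np.meshgrid(np.arange(50, 1000, 100), np.arange(50, 1000, 100))
targets = np.column_stack([gx.ravel(), gy.ravel()])
base = idw(xy, grade, targets)

# Sensitivity curve: growing relative analytical error vs. estimate deviation
for rel_sd in (0.02, 0.05, 0.10, 0.20):
    perturbed = grade * (1 + rng.normal(0, rel_sd, grade.shape))
    dev = np.mean(np.abs(idw(xy, perturbed, targets) - base) / base)
    print(f"data error {rel_sd:.0%} -> mean estimate deviation {dev:.2%}")
```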
236
Use of pseudomeasurements in a state estimator for computing harmonic distortion in electrical power systems. Pulz, Lucas Tupi Caldas, January 2017
The growing presence of harmonic currents in distribution systems, mainly due to distributed generation, has drawn attention to their possible consequences. This work presents a method for assessing harmonics in an electric power system through a state estimator. The proposal is a method for monitoring the distribution network using as few measurement devices as possible, achieved by identifying network topologies in which a pseudomeasurement can replace a measurement device. The method is applied to a case study based on the IEEE 13-bus model, and the results of the state estimator are compared with a simulation. A sensitivity analysis of the code is also performed, observing the results when errors are added to the measurements and to the line parameters of the system.
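A sketch of the weighted-least-squares estimation step on a small linearized example, with the pseudomeasurement simply given a larger standard deviation (and hence a smaller weight). The matrix and values are assumed, not the IEEE 13-bus harmonic model.

```python
import numpy as np

rng = np.random.default_rng(4)
# Linearized measurement model z = H x + e on a small 3-state example
# (assumed matrices; the thesis solves the harmonic case on IEEE 13-bus).
H = np.array([[1.0,  0.0,  0.0],
              [1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [0.0,  0.0,  1.0]])
x_true = np.array([1.02, 0.98, 0.95])

sd = np.array([0.01, 0.01, 0.01, 0.05])   # last row: pseudomeasurement,
z = H @ x_true + rng.normal(0.0, sd)      # trusted less via a larger sd

# Weighted least squares: x = (H' W H)^-1 H' W z with W = diag(1/sd^2)
W = np.diag(1.0 / sd**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimate:", x_hat.round(4), "max error:", float(np.abs(x_hat - x_true).max()))
```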
237
Sensitivity analysis of the Kanako-2D debris-flow model. Paixão, Maurício Andrades, January 2017
Because debris flows are a complex phenomenon, computational modeling has been used in attempts to simulate their behavior; one such computational model is Kanako-2D. The present work performs a sensitivity analysis of this model with respect to runout length, erosion area, deposition area, total affected area, and flow width. The values of the Kanako-2D input parameters, whose ranges were established from a literature review, were changed individually while the others were kept at the model's default values. The analyzed parameters were: sediment diameter, Manning roughness coefficient, deposition-rate coefficient, erosion-rate coefficient, mass density of the fluid phase, mass density of the bed material, sediment concentration, and internal friction angle. A real slope site with a history of debris flows (the Böni creek basin, in Alto Feliz and São Vendelino, RS) and a hypothetical slope site with the same characteristics were used to evaluate the effect of topography on flow propagation, and different hillslope and alluvial-fan conditions were also simulated. The sensitivity of the model was quantified with three methods: (a) screening analysis, which indicated bed density, internal friction angle, and sediment concentration as the parameters to which the model is most sensitive; (b) regionalized analysis, which pointed to bed density, internal friction angle, and fluid-phase density; and (c) analysis of variances, in which the most influential parameters were the erosion-rate coefficient, sediment diameter, and bed density. Overall, the results showed that the parameters generating the greatest sensitivity in the model are bed density, internal friction angle, and sediment concentration, while the largest relative variations in the model response were observed for bed density, internal friction angle, and fluid-phase density. In descending order, the highest sensitivities were found for erosion area, total area, deposition area, runout length, and width on the real slope, and for total area, runout length, and width on the hypothetical slope. The terrain condition that produced the largest runout and affected area was a 45° hillslope with a 17° alluvial fan.
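A sketch of the screening (one-at-a-time) method in (a), using a made-up closed-form stand-in for the Kanako-2D simulator; parameter names, defaults, and ranges are illustrative assumptions only.

```python
import numpy as np

# Made-up closed-form stand-in for the Kanako-2D output (runout length) as
# a function of three of its parameters; names and ranges are illustrative.
def runout(d_sed, rho_bed, phi):
    return 800.0 * (rho_bed / 2000.0) ** 1.5 / (np.tan(np.radians(phi)) * d_sed ** 0.2)

base = {"d_sed": 0.02, "rho_bed": 2200.0, "phi": 35.0}        # model defaults
ranges = {"d_sed": (0.005, 0.1), "rho_bed": (1800.0, 2600.0), "phi": (30.0, 40.0)}

# One-at-a-time screening: sweep each parameter over its literature range
# while the others stay at their default values.
y0 = runout(**base)
for name, (lo, hi) in ranges.items():
    ys = [runout(**{**base, name: v}) for v in np.linspace(lo, hi, 5)]
    print(f"{name:8s} relative spread of runout: {(max(ys) - min(ys)) / y0:.0%}")
```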
238
Impact of the variability of weather data on a low-energy house: application of sensitivity analysis for correlated temporal inputs. Goffart, Jeanne, 12 December 2013
This thesis is part of the ANR project FIABILITE, which deals with the reliability of dynamic thermal simulation software and, in particular, with the potential sources of bias and uncertainty in the thermal and energy modeling of low-energy buildings. Solicitations such as occupancy schedules, weather data, and usage scenarios are among the most uncertain and potentially most influential inputs on the performance of a low-energy building. To guarantee performance, one must determine the output dispersion associated with the variability of the temporal inputs and identify the variables responsible for it, either to reduce their variability or to design the building robustly. To address this problem, we rely on the Sobol sensitivity indices, which are suited to complex, high-dimensional models such as building models for dynamic thermal simulation. Since handling functional inputs is a scientific obstacle for standard sensitivity analysis methods, an original methodology was developed in this thesis to generate samples compatible with the estimation of the sensitivity. Although the method is generic for functional inputs, it was validated here on meteorological data, in particular the typical meteorological year (TMY) files used in dynamic thermal simulation. The two main aspects of this development work are the characterization of the variability of meteorological data and the generation of samples for estimating the sensitivity of the dispersion of a building's performance to each weather variable. Through several case studies derived from the thermal model of a low-energy house, the dispersion and the influential parameters related to weather variability are estimated. The results reveal an uncertainty interval on the energy demand of about 20% at a 95% confidence level, dominated by the outdoor temperature and the direct solar radiation.
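A sketch of how first-order Sobol indices can be estimated with a pick-freeze (Saltelli-type) scheme, using a toy scalar stand-in for the building model; the thesis's contribution is precisely how to build such samples when the inputs are full correlated weather time series rather than the two independent scalars assumed here.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy scalar stand-in for the building model: heating demand driven by two
# weather variables (outdoor temperature and direct solar) -- assumed, since
# the thesis simulator takes full annual weather time series as input.
def demand(t_ext, solar):
    return np.maximum(0.0, 18.0 - t_ext) * 120.0 - 0.4 * solar

n = 100_000
A = np.column_stack([rng.normal(5, 4, n), rng.uniform(0, 600, n)])
B = np.column_stack([rng.normal(5, 4, n), rng.uniform(0, 600, n)])
yA = demand(A[:, 0], A[:, 1])
yB = demand(B[:, 0], B[:, 1])
var = yA.var()

# Pick-freeze: replace column i of B with column i of A, re-evaluate,
# and estimate the first-order Sobol index S_i.
for i, name in enumerate(["T_ext", "solar"]):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    yABi = demand(ABi[:, 0], ABi[:, 1])
    Si = np.mean(yA * (yABi - yB)) / var
    print(f"S_{name} = {Si:.2f}")
```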
239
Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies. Nanty, Simon, 15 October 2015
This work falls within the framework of uncertainty quantification for numerical simulators and studies two industrial applications linked to safety studies of nuclear reactors. The two applications share several features. First, the inputs of the studied codes are both functional and scalar variables, the functional ones being mutually dependent. Second, the probability distribution of the functional variables is known only through a sample of their realizations. Third, in one of the two cases, the high computational cost of the code limits the number of possible simulations. The main objective of this work was to propose a complete uncertainty-treatment methodology for the numerical simulators of the two cases studied. First, we proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between the functional variables and to account for their link to another variable, called a covariate, which may be, for instance, the output of the studied code. Alongside this methodology, we developed an adaptation of a visualization tool for functional data that simultaneously displays the uncertainties and features of several dependent functional variables. Second, a methodology was proposed to perform the global sensitivity analysis of the simulators of the two case studies. For a computationally expensive code, the direct application of quantitative global sensitivity analysis methods is intractable; the retained solution is to build a surrogate model, or metamodel, that approximates the code and runs very quickly. An optimized uniform sampling method for scalar and functional variables was developed to build the metamodel's learning basis. Finally, a new approach to approximating expensive codes with functional inputs was explored, in which the code is viewed as a stochastic code whose randomness is due to the functional variables, assumed uncontrollable. Under these assumptions, several metamodels were developed and compared. All the methods proposed in this work were applied to the two studied applications.
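A sketch of the empirical Karhunen-Loève step used to reduce functional variables known only through a sample: decompose the sample covariance and keep the leading modes. The synthetic curves below stand in for the dependent functional inputs of the safety codes.

```python
import numpy as np

rng = np.random.default_rng(6)
# Sample of functional inputs: 50 correlated temperature-like curves on a
# 100-point grid (synthetic; the thesis works with measured transients).
t = np.linspace(0.0, 1.0, 100)
curves = (np.outer(rng.normal(0, 1.0, 50), np.sin(2 * np.pi * t))
          + np.outer(rng.normal(0, 0.5, 50), np.cos(2 * np.pi * t))
          + rng.normal(0, 0.05, (50, 100)))

# Empirical Karhunen-Loeve: eigendecomposition of the sample covariance
mean = curves.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(curves, rowvar=False))
vals, vecs = vals[::-1], vecs[:, ::-1]               # sort descending

explained = np.cumsum(vals) / vals.sum()
m = int(np.searchsorted(explained, 0.99)) + 1        # modes for 99% variance
print(f"{m} KL modes retain {explained[m - 1]:.1%} of the variance")

# Each curve is now summarized by m scalar coefficients -- the finite
# parameterization that makes the inverse problem tractable.
coeffs = (curves - mean) @ vecs[:, :m]
print("reduced representation:", coeffs.shape)
```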
240
Estimation of rotor bearing stiffness by sensitivity analysis. Caldiron, Leonardo [UNESP], 30 September 2004
In this work, the computational routines of a method for estimating machine bearing stiffness through a model-updating process are optimized using sensitivity analysis. The method uses the sensitivity of the eigenvalues with respect to variations in the bearing stiffness of a rotor. The efficacy and robustness of the method are analyzed through theoretical simulations, as well as with experimental data obtained from a test rotor with variable rotating speed and adjustable bearing stiffness. The mathematical model of the system is developed with the finite element method, and the updating converges through an iterative process based on minimizing the difference between the experimental eigenvalues and the eigenvalues obtained from the mathematical model starting from previously adopted bearing stiffness values. The analysis is carried out with the rotor at several rotating speeds, to verify the influence of the gyroscopic effect, and under several bearing stiffness conditions, to assess the method when applied to flexible and to rigid rotors. The performance of the method is evaluated with theoretical and experimental results.
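A sketch of the updating loop described here, on a two-degree-of-freedom stand-in for the finite element rotor model: eigenvalue sensitivities are obtained by finite differences and the bearing stiffnesses are updated by a Newton-type step. Matrices, masses, and stiffness values are assumed for illustration.

```python
import numpy as np

# Two-DOF stand-in for the finite element rotor model. Equal masses keep
# M^-1 K symmetric, so eigvalsh applies. All values are illustrative.
M = np.diag([10.0, 10.0])                       # kg
def eigvals(k):
    """Eigenvalues (rad^2/s^2) for bearing stiffnesses k = [k1, k2] (N/m)."""
    K = np.array([[k[0] + 5e5, -5e5],
                  [-5e5, k[1] + 5e5]])
    return np.sort(np.linalg.eigvalsh(np.linalg.solve(M, K)))

k_true = np.array([2.0e6, 3.5e6])
lam_exp = eigvals(k_true)                       # plays the experimental data

# Iterative update: finite-difference eigenvalue sensitivities + Newton step
k = np.array([1.0e6, 2.0e6])                    # initial stiffness guess
for _ in range(20):
    r = eigvals(k) - lam_exp
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(lam_exp):
        break
    J = np.empty((2, 2))
    for j in range(2):                          # d(lambda_i)/d(k_j)
        dk = k.copy()
        dk[j] *= 1.0 + 1e-6
        J[:, j] = (eigvals(dk) - eigvals(k)) / (dk[j] - k[j])
    k = np.maximum(k - np.linalg.solve(J, r), 1e5)   # keep stiffness positive
print("estimated stiffness [N/m]:", k.round(0), "true:", k_true)
```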