511.
Calibration Bayésienne d'un modèle d'étude d'écosystème prairial : outils et applications à l'échelle de l'Europe / no title available. Ben Touhami, Haythem, 07 March 2014.
Les prairies représentent 45% de la surface agricole en France et 40% en Europe, ce qui montre qu’il s’agit d’un secteur important particulièrement dans un contexte de changement climatique où les prairies contribuent d’un côté aux émissions de gaz à effet de serre et en sont impactées de l’autre côté. L’enjeu de cette thèse a été de contribuer à l’évaluation des incertitudes dans les sorties de modèles de simulation de prairies (et utilisés dans les études d’impact aux changements climatiques) dépendant du paramétrage du modèle. Nous avons fait appel aux méthodes de la statistique Bayésienne, basées sur le théorème de Bayes, afin de calibrer les paramètres d’un modèle référent et améliorer ainsi ses résultats en réduisant l’incertitude liée à ses paramètres et, par conséquent, à ses sorties. Notre démarche s’est basée essentiellement sur l’utilisation du modèle d’écosystème prairial PaSim, déjà utilisé dans plusieurs projets européens pour simuler l’impact des changements climatiques sur les prairies. L’originalité de notre travail de thèse a été d’adapter la méthode Bayésienne à un modèle d’écosystème complexe comme PaSim (appliqué dans un contexte de climat altéré et à l’échelle du territoire européen) et de montrer ses avantages potentiels dans la réduction d’incertitudes et l’amélioration des résultats, en combinant notamment méthodes statistiques (technique Bayésienne et analyse de sensibilité avec la méthode de Morris) et outils informatiques (couplage code R-PaSim et utilisation d’un cluster de calcul). Cela nous a conduit à produire d’abord un nouveau paramétrage pour des sites prairiaux soumis à des conditions de sécheresse, et ensuite à un paramétrage commun pour les prairies européennes. Nous avons également fourni un outil informatique de calibration générique pouvant être réutilisé avec d’autres modèles et sur d’autres sites. 
Enfin, nous avons évalué la performance du modèle calibré par le biais de la technique Bayésienne sur des sites de validation, dont les résultats ont confirmé l’efficacité de cette technique pour la réduction d’incertitude et l’amélioration de la fiabilité des sorties. / Grasslands cover 45% of the agricultural area in France and 40% in Europe. Grassland ecosystems have a central role in the climate change context, not only because they are impacted by climate changes but also because grasslands contribute to greenhouse gas emissions. The aim of this thesis was to contribute to the assessment of uncertainties in the outputs of grassland simulation models, which are used in impact studies, with a focus on model parameterization. In particular, we used the Bayesian statistical method, based on Bayes’ theorem, to calibrate the parameters of a reference model, and thus improve performance by reducing the uncertainty in the parameters and, consequently, in the outputs provided by models. Our approach is essentially based on the use of the grassland ecosystem model PaSim (Pasture Simulation model), already applied in a variety of international projects to simulate the impact of climate changes on grassland systems. The originality of this thesis was to adapt the Bayesian method to a complex ecosystem model such as PaSim (applied in the context of altered climate and across the European territory) and show its potential benefits in reducing uncertainty and improving the quality of model outputs. This was obtained by combining statistical methods (Bayesian techniques and sensitivity analysis with the method of Morris) and computing tools (R–PaSim code coupling and use of cluster computing resources). We have first produced a new parameterization for grassland sites under drought conditions, and then a common parameterization for European grasslands. We have also provided a generic software calibration tool that can be reused with other models and sites.
Finally, we have evaluated the performance of the calibrated model through the Bayesian technique against data from validation sites. The results have confirmed the efficiency of this technique for reducing uncertainty and improving the reliability of simulation outputs.
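The Morris screening named in this abstract ranks model parameters by their mean absolute elementary effects (often written mu*). The sketch below is a simplified one-at-a-time version with a made-up test function, not PaSim's parameter set or the full Morris trajectory design:

```python
import random

def morris_mu_star(f, bounds, r=20, delta=0.25, seed=1):
    """Mean absolute elementary effect (mu*) per input: a one-at-a-time
    screening in the spirit of the Morris method."""
    rng = random.Random(seed)
    k = len(bounds)
    mu = [0.0] * k
    for _ in range(r):
        # random base point, kept delta away from each upper bound
        x = [lo + rng.random() * (1 - delta) * (hi - lo) for lo, hi in bounds]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta * (bounds[i][1] - bounds[i][0])
            mu[i] += abs((f(xp) - fx) / delta)
    return [m / r for m in mu]
```

For a function dominated by its first input, mu* for that input comes out much larger than for the others, which is exactly the ranking used to decide which parameters are worth calibrating.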
512.
Méthodes de méta-analyse pour l’estimation des émissions de N2O par les sols agricoles / Meta-analysis methods to estimate N2O emissions from agricultural soils. Philibert, Aurore, 16 November 2012.
Le terme de méta-analyse désigne l'analyse statistique d'un large ensemble de résultats provenant d'études individuelles pour un même sujet donné. Cette approche est de plus en plus étudiée dans différents domaines, notamment en agronomie. Dans cette discipline, une revue bibliographique réalisée dans le cadre de la thèse a cependant montré que les méta-analyses n'étaient pas toujours de bonne qualité. Les méta-analyses effectuées en agronomie étudient ainsi très rarement la robustesse de leurs conclusions aux données utilisées et aux méthodes statistiques. L'objectif de cette thèse est de démontrer et d'illustrer l'importance des analyses de sensibilité dans le cadre de la méta-analyse en s'appuyant sur l'exemple de l'estimation des émissions de N2O provenant des sols agricoles. L'estimation des émissions de protoxyde d'azote (N2O) est réalisée à l'échelle mondiale par le Groupe d'experts intergouvernemental sur l'évolution du climat (GIEC). Le N2O est un puissant gaz à effet de serre avec un pouvoir de réchauffement 298 fois plus puissant que le CO2 sur une période de 100 ans. Les émissions de N2O ont la particularité de présenter une forte variabilité spatiale et temporelle. Deux bases de données sont utilisées dans ce travail : la base de données de Rochette et Janzen (2005) et celle de Stehfest et Bouwman (2006). Elles recensent de nombreuses mesures d'émissions de N2O réparties dans le monde provenant d'études publiées et ont joué un rôle important lors des estimations d'émissions de N2O réalisées par le GIEC. Les résultats montrent l'intérêt des modèles à effets aléatoires pour estimer les émissions de N2O issues de sols agricoles. Ils sont bien adaptés à la structure des données (observations répétées sur un même site pour différentes doses d'engrais, avec plusieurs sites considérés). Ils permettent de distinguer la variabilité inter-sites de la variabilité intra-site et d'estimer l'effet de la dose d'engrais azoté sur les émissions de N2O.
Dans ce mémoire, l'analyse de la sensibilité des estimations à la forme de la relation "Emission de N2O / Dose d'engrais azoté" a montré qu'une relation exponentielle était plus adaptée. Il apparaît ainsi souhaitable de remplacer le facteur d'émission constant du GIEC (1% d'émission quelle que soit la dose d'engrais azoté) par un facteur variable qui augmenterait en fonction de la dose. Nous n'avons par contre pas identifié de différence importante entre les méthodes d'inférence fréquentiste et bayésienne. Deux approches ont été proposées pour inclure des variables de milieu et de pratiques culturales dans les estimations de N2O. La méthode Random Forest permet de gérer les données manquantes et présente les meilleures prédictions d'émission de N2O. Les modèles à effets aléatoires permettent eux de prendre en compte ces variables explicatives par le biais d'une ou plusieurs mesures d'émission de N2O. Cette méthode permet de prédire les émissions de N2O pour des doses non testées comme le cas non fertilisé en parcelles agricoles. Les résultats de cette méthode sont cependant sensibles au plan d'expérience utilisé localement pour mesurer les émissions de N2O. / The term meta-analysis refers to the statistical analysis of a large set of results coming from individual studies on the same topic. This approach is increasingly used in various areas, including agronomy. In this domain, however, a bibliographic review conducted as part of this thesis showed that meta-analyses were not always of good quality. Meta-analyses in agronomy very seldom study the robustness of their findings with respect to the data used and the statistical methods. The objective of this thesis is to demonstrate and illustrate the importance of sensitivity analysis in the context of meta-analysis, using the estimation of N2O emissions from agricultural soils as an example.
The estimation of emissions of nitrous oxide (N2O) is made at the global level by the Intergovernmental Panel on Climate Change (IPCC). N2O is a potent greenhouse gas with a global warming potential 298 times that of CO2 over a 100-year period. A key characteristic of N2O emissions is their strong spatial and temporal variability. Two databases are used for this work: the database of Rochette and Janzen (2005) and that of Stehfest and Bouwman (2006). They collect numerous worldwide N2O emission measurements from published studies and have played a significant role in the estimation of N2O emissions produced by the IPCC. The results show the value of random-effects models for estimating N2O emissions from agricultural soils. They are well suited to the structure of the data (repeated observations on the same site for different doses of fertilizers, with several sites considered). They make it possible to separate inter-site from intra-site variability and to estimate the effect of the nitrogen fertilizer dose on N2O emissions. In this thesis, the analysis of the sensitivity of the estimations to the shape of the relationship "Emission of N2O / N fertilizer dose" has shown that an exponential relationship is the most appropriate. It therefore seems desirable to replace the constant emission factor of the IPCC (1% emitted regardless of the nitrogen fertilizer dose) by a variable factor which would increase with the dose. On the other hand, we did not identify significant differences between frequentist and Bayesian inference methods. Two approaches have been proposed to include environmental variables and cropping practices in the estimates of N2O. The first, using the Random Forest method, handles missing data and provides the best N2O emission predictions. The second, based on random-effects models, takes these explanatory variables into account via one or several N2O emission measurements.
This makes it possible to predict N2O emissions for untested doses, such as the unfertilized case in farmers' fields. However, the results are sensitive to the experimental design used locally to measure N2O emissions.
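The contrast drawn above between the IPCC's constant 1% factor and a dose-dependent exponential relationship can be made concrete. In the sketch below the coefficients a and b are illustrative placeholders, not the values fitted in the thesis:

```python
import math

def ef_constant(n_dose):
    """IPCC Tier 1 default: a constant 1% of applied N emitted as N2O-N,
    independent of the dose."""
    return 0.01

def ef_exponential(n_dose, a=-1.0, b=0.0061):
    """Dose-dependent emission factor implied by an exponential
    emission-dose relationship E(N) = exp(a + b*N) - exp(a).
    Coefficients a, b are illustrative, not the thesis estimates."""
    return (math.exp(a + b * n_dose) - math.exp(a)) / n_dose
```

Under the exponential curve the implied emission factor E(N)/N grows with the dose, which is the behaviour the thesis argues should replace the flat 1% default.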
513.
Análise de sensibilidade e resíduos em modelos de regressão com respostas bivariadas por meio de cópulas / Bivariate response regression models with copulas: sensitivity and residual analysis. Gomes, Eduardo Monteiro de Castro, 01 February 2008.
Neste trabalho são apresentados modelos de regressão com respostas bivariadas obtidos através de funções cópulas. O objetivo de utilizar estes modelos bivariados é modelar a correlação entre eventos e captar nos modelos de regressão a influência da associação entre as variáveis resposta na presença de censura nos dados. Os parâmetros dos modelos são estimados por meio dos métodos de máxima verossimilhança e jackknife. Alguns métodos de análise de sensibilidade, como influência global, local e local total de um indivíduo, são introduzidos e calculados considerando diferentes esquemas de perturbação. Uma análise de resíduos foi proposta para verificar a qualidade do ajuste dos modelos utilizados e também foram propostas novas medidas de resíduos para respostas bivariadas. Métodos de simulação de Monte Carlo foram conduzidos para estudar a distribuição empírica dos resíduos marginais e bivariados propostos. Finalmente, os resultados são aplicados a dois conjuntos de dados disponíveis na literatura. / In this work bivariate response regression models are presented with the use of copulas. The objective of this approach is to model the correlation between events and capture the influence of this correlation in the regression parameters. The models are used in the context of survival analysis and are fitted to two data sets available in the literature. Inferences are obtained using maximum likelihood and jackknife methods. Sensitivity techniques such as local and global influence are proposed and calculated. A residual analysis is proposed to check the adequacy of the models, and simulation methods are used to assess the empirical distribution of the marginal univariate and bivariate residual measures proposed.
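A copula couples fixed marginals to a chosen dependence structure. As a minimal illustration (using a bivariate Gaussian copula, an arbitrary choice here rather than the family studied in the thesis), one can sample dependent uniform pairs and check the induced association:

```python
import math
import random

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) with uniform marginals whose dependence comes
    from a bivariate Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # correlate the second normal with the first (Cholesky factor of a 2x2 matrix)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))
    return pairs
```

The pairs (u, v) can then be pushed through any marginal quantile functions, for instance survival-time distributions, which is how copula regression models keep the marginals and the association between responses separate.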
514.
Microgeração fotovoltaica no Brasil: condições atuais e perspectivas futuras / Photovoltaic microgeneration in Brazil: current conditions and future prospects. Nakabayashi, Rennyo Kunizo, 15 December 2014.
A atratividade econômica da micro e minigeração está intrinsecamente relacionada às tarifas de energia elétrica convencional, já que o benefício, do ponto de vista financeiro, para o micro/minigerador é o custo evitado para a compra de energia elétrica convencional. Desta forma, realizou-se a avaliação econômico-financeira de sistemas fotovoltaicos de geração distribuída sob a ótica do consumidor residencial. A análise foi realizada para as 27 capitais brasileiras e incluiu estimativas relacionadas às seguintes figuras de mérito: Valor Presente Líquido (VPL), Taxa Interna de Retorno (TIR) e Payback (tempo de retorno sobre o investimento). Foi realizada uma análise de sensibilidade e uma projeção dos resultados para o ano de 2020, utilizando Simulação de Monte Carlo. Para o ano de 2015, a expectativa é que na maioria das capitais brasileiras já existam condições favoráveis para a micro/minigeração com sistemas fotovoltaicos, dados os reajustes tarifários de energia elétrica aprovados em 2014. Observou-se que, dependendo da diferença entre as tarifas com e sem impostos, o percentual de autoconsumo pode exercer grande influência sobre a atratividade financeira na microgeração. Em 2020, espera-se que a probabilidade de viabilidade da microgeração fotovoltaica ultrapasse os 90%, enquanto que, em 2015, a probabilidade de viabilidade para as 27 capitais brasileiras está próxima de 62%. / The economic attractiveness of micro and minigeneration is intrinsically related to the conventional electricity tariff, since the financial benefit to the micro/minigenerator is the avoided cost of purchasing conventional electricity. In this way, an economic assessment of photovoltaic distributed generation was performed from the perspective of the residential consumer. The analysis was made for the 27 Brazilian capitals and considered the following figures of merit: Net Present Value (NPV), Internal Rate of Return (IRR) and payback (return time on investment).
A sensitivity analysis was made, besides a probabilistic projection for the year 2020 using Monte Carlo simulation. For 2015, the expectation is that favorable conditions for photovoltaic micro/minigeneration already exist in most Brazilian capitals, mainly because of the electricity tariff adjustments approved in 2014. It was also observed that, depending on the difference between the tariffs with and without taxes, the self-consumption percentage can greatly influence the financial attractiveness of microgeneration. By 2020, the probability of viability of photovoltaic microgeneration is expected to exceed 90% in the Brazilian capitals, while in 2015 it is near 62% for the 27 capital cities.
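The three figures of merit used in this study can be sketched in a few lines. The cash flows in the usage example are hypothetical round numbers, not the tariff data of the study:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which the cumulative (undiscounted) cash flow turns
    non-negative; None if the investment never pays back."""
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf
        if cum >= 0:
            return t
    return None

def irr(cashflows, lo=-0.9, hi=10.0, n=100):
    """Internal rate of return by bisection on the NPV sign change.
    Assumes an investment-type profile: NPV positive at lo, negative at hi."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, an upfront cost of 1000 followed by five annual savings of 300 has a positive NPV at a 10% discount rate, pays back in year 4, and an IRR a little above 15%. A Monte Carlo viability estimate, as in the study, would repeat the NPV calculation over randomly drawn tariffs and count the fraction of draws with NPV > 0.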
515.
Estudo qualitativo de um modelo de propagação de dengue / Qualitative study of a dengue disease transmission model. Santos, Bruna Cassol dos, 25 July 2016.
Em epidemiologia matemática, muitos modelos de propagação de doenças infecciosas em populações têm sido analisados matematicamente e aplicados para doenças específicas. Neste trabalho um modelo de propagação de dengue é analisado considerando-se diferentes hipóteses sobre o tamanho da população humana. Mais precisamente, estamos interessados em verificar o impacto das variações populacionais a longo prazo no cálculo do parâmetro Ro e no equilíbrio endêmico. Vamos discutir algumas ideias que nortearam o processo de definição do parâmetro Ro a partir da construção do Operador de Próxima Geração. Através de um estudo qualitativo do modelo matemático, obtivemos que o equilíbrio livre de doença é globalmente assintoticamente estável se Ro é menor ou igual a 1 e instável se Ro>1. Para Ro>1, a estabilidade global do equilíbrio endêmico é provada usando um critério geral para estabilidade orbital de órbitas periódicas associadas a sistemas autônomos não lineares de altas ordens e resultados da teoria de sistemas competitivos para equações diferenciais ordinárias. Também foi desenvolvida uma análise de sensibilidade do Ro e do equilíbrio endêmico com relação aos parâmetros do modelo de propagação. Diversos cenários foram simulados a partir dos índices de sensibilidade obtidos nesta análise. Os resultados demonstram que, de forma geral, o parâmetro Ro e o equilíbrio endêmico apresentam considerável sensibilidade à taxa de picadas do vetor e à taxa de mortalidade do vetor. / In mathematical epidemiology, many models of the spread of infectious diseases in populations have been analyzed mathematically and applied to specific diseases. In this work a dengue propagation model is analyzed considering different assumptions about the size of the human population. More precisely, we are interested in verifying the impact of long-term population variations on the calculation of the parameter Ro and on the endemic equilibrium.
We discuss some ideas that guided the process of defining the parameter Ro via the construction of the Next Generation Operator. Through a qualitative study of the mathematical model, we found that the disease-free equilibrium is globally asymptotically stable if Ro is less than or equal to 1 and unstable if Ro > 1. For Ro > 1, the global stability of the endemic equilibrium is proved using a general criterion for orbital stability of periodic orbits associated with nonlinear autonomous systems of higher order and results from the theory of competitive systems for ordinary differential equations. A sensitivity analysis of Ro and of the endemic equilibrium with respect to the parameters of the propagation model was also developed. Several scenarios were simulated from the sensitivity indices obtained in this analysis. The results demonstrate that, in general, the parameter Ro and the endemic equilibrium are considerably sensitive to the vector biting rate and the vector mortality rate.
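The sensitivity indices mentioned above are commonly normalized forward indices, (dRo/dp)(p/Ro), so that an index of 2 means a 1% change in the parameter moves Ro by about 2%. A sketch with an illustrative Ross-Macdonald-type Ro (not the thesis' dengue model) reproduces the qualitative finding that the biting rate and vector mortality dominate:

```python
def r0(a, mu_v, m=2.0, b=0.5, c=0.5, gamma=0.1):
    """Ross-Macdonald-type basic reproduction number, an illustrative form:
    Ro = m * a^2 * b * c / (gamma * mu_v), with biting rate a and vector
    mortality mu_v."""
    return m * a ** 2 * b * c / (gamma * mu_v)

def sensitivity_index(f, params, name, h=1e-7):
    """Normalized forward sensitivity index (df/dp)*(p/f), estimated by a
    finite-difference derivative in the parameter `name`."""
    base = f(**params)
    bumped = dict(params)
    bumped[name] += h
    return (f(**bumped) - base) / h * params[name] / base
```

For an Ro proportional to a^2 / mu_v the indices are exactly +2 for the biting rate and -1 for the vector mortality, regardless of the other parameter values, which is why these two rates dominate the ranking.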
516.
Contribuição para a teoria termodinamicamente consistente da fratura / Contribution to the thermodynamically consistent theory of fracture. Rocha, João Augusto de Lima, 12 March 1999.
Como ponto de partida para a formulação da teoria termodinamicamente consistente da fratura, parte-se das cinco equações globais do balanço termomecânico (massa, momentum linear, momentum angular, energia e entropia), aplicadas ao caso de um sólido dentro do qual superfícies internas regulares podem evoluir, continuamente, com o processo de deformação, simulando fissuras. Faz-se a passagem das equações globais às correspondentes equações locais de balanço, inclusive nos pontos das superfícies de avanço das fissuras, e chega-se ao critério termodinâmico geral de fratura. Fazendo-se uso da noção de energia livre de Helmholtz, particulariza-se o critério para o caso isotérmico. Na seqüência, contando-se com o auxílio da Análise de Sensibilidade à variação de forma, da Otimização Estrutural, aplicada ao caso da fratura, obtém-se o parâmetro termodinâmico de fratura, válido para uma parte arbitrária do sólido contendo uma fissura. Assim, o problema fica reduzido à obtenção do valor de uma integral sobre a fronteira da parte do sólido considerada. O Método dos Elementos de Contorno é utilizado para a obtenção de resultados aproximados desse parâmetro, que é alternativo à integral J de Rice. Conclui-se, com uma proposta de experimento de laboratório, acoplado a um experimento numérico, para o caso de um problema bidimensional. A partir da comparação entre resultados do experimento de laboratório e do correspondente experimento numérico, sugere-se que será possível a calibração de parâmetros associados ao comportamento não linear do material nas proximidades da extremidade de uma fissura. / The construction of a thermodynamically consistent theory of fracture is proposed here, assuming that the five global balance equations of continuum thermomechanics (mass, linear momentum, angular momentum, energy and entropy) remain valid for a solid containing flaws that simulate initial cracks.
Considering the possibility of crack advance, the passage from the global equations to the local ones leads to local balance equations, including at points on the crack-advance surfaces. As a consequence, a general thermodynamic fracture criterion is obtained. Then, using the concept of Helmholtz free energy, this fracture criterion is particularised to the isothermal case. Shape sensitivity analysis, used as a tool of fracture mechanics, leads to a thermodynamic fracture parameter Gt, whose physical meaning is analogous to Griffith's energy release rate (or Rice's J integral), but which is based on the strain energy instead of the total potential energy. The Boundary Element Method is used to build a strategy coupling numerical and experimental tests, aiming at a particular fracture criterion valid for plane problems. In conclusion, it is proposed that this numerical-experimental coupling be adopted to calibrate non-linear models of material behaviour valid in the neighbourhood of the crack tip.
517.
Simulation and optimization of primary oil and gas processing plant of FPSO operating in pre-salt oil field / Simulação e otimização de planta de processamento primário de óleo e gás de FPSO operando em campo de petróleo do pré-sal. Bidgoli, Ali Allahyarzadeh, 11 September 2018.
FPSO (Floating, Production, Storage and Offloading) plants, like other offshore oil and gas processing plants, are known to be energy-intensive. Thus, energy consumption and production optimization procedures can be applied to find optimum operating conditions of the unit, reducing costs and CO2 emissions for oil and gas processing companies. A primary processing plant of a typical FPSO operating in a Brazilian deep-water oil field in pre-salt areas is modeled and simulated using its real operating data. Three operating conditions of the oil field are presented in this research: (i) maximum oil/gas content (mode 1), (ii) 50% BSW oil content (mode 2) and (iii) high water/CO2 in oil content (mode 3). In addition, an aero-derivative gas turbine (RB211G62 DLE 60Hz) with offshore application is considered for the heat and power generation unit, using real performance data. The impact of eight thermodynamic input parameters on fuel consumption and hydrocarbon liquids recovery of the FPSO unit is investigated by the Smoothing Spline ANOVA (SS-ANOVA) method. From SS-ANOVA, the input parameters with the highest impact on fuel consumption and hydrocarbon liquids recovery were selected for an optimization procedure. The software Aspen HYSYS is used as the process simulator for the screening analysis and for the optimization procedure, which consisted of a hybrid algorithm (NSGA-II + SQP). The objective functions used in the optimization were the minimization of fuel consumption of the processing and utility plants and the maximization of hydrocarbon liquids recovery.
From SS-ANOVA, the statistical analysis revealed that the most important parameters affecting the fuel consumption of the plant are: (1) output pressure of the first control valve (P1); (2) output pressure of the second stage of the separation train before mixing with dilution water (P2); (3) input pressure of the third stage of the separation train (P3); (4) input pressure of dilution water (P4); (5) output pressure of the main gas compressor (Pc); (6) output petroleum temperature in the first heat exchanger (T1); (7) output petroleum temperature in the second heat exchanger (T2); and (8) dilution water temperature (T3). Four input parameters (P1, P2, P3 and Pc), three input parameters (P3, Pc and T2) and three input parameters (P3, Pc and T2) correspond to 96%, 97% and 97% of the total contribution to fuel consumption for modes 1, 2 and 3, respectively. For the hydrocarbon liquids recovery of the plant: four input parameters (P1, P2, P3 and T2), three input parameters (P3, P2 and T2) and three input parameters (P3, P2 and T2) correspond to 95%, 97% and 98% of the total contribution to hydrocarbon liquids recovery for modes 1, 2 and 3, respectively. The results of the optimized case indicated that the minimization of fuel consumption is achieved by increasing the operating pressure in the third stage of the separation train and by decreasing the operating temperature in the second stage of the separation train for all operation modes. There was a reduction in power demand of 6.4% for mode 1, 10% for mode 2 and 2.9% for mode 3, in comparison to the baseline case. Consequently, the fuel consumption of the plant was decreased by 4.46% for mode 1, 8.34% for mode 2 and 2.43% for mode 3, when compared to the baseline case. Moreover, the optimization found an improvement in the recovery of the volatile components, in comparison with the baseline cases.
Furthermore, the optimum operating condition found by the optimization procedure for hydrocarbon liquids recovery presented an increase of 4.36% for mode 1, 3.79% for mode 2 and 1.75% for mode 3 in hydrocarbon liquids recovery (stabilization and saving), when compared to the conventional operating condition of their baselines. / As plantas FPSO (Floating, Production, Storage e Offloading), assim como outras plataformas de processamento offshore de petróleo e gás, são conhecidas por terem processos com uso intensivo de energia. Portanto, qualquer aplicação de procedimentos de otimização para consumo de energia e/ou produção pode ser útil para encontrar as melhores condições de operação da unidade, reduzindo custos e emissões de CO2 de empresas que atuam na área de petróleo e gás. Uma planta de processamento primário de uma plataforma FPSO típica, operando em um campo de petróleo em águas profundas brasileiras e em áreas do pré-sal, é modelada e simulada usando seus dados operacionais reais: (i) Teor máximo de óleo / gás (modo 1), (ii) 50 % de teor de BSW no óleo (modo 2) e (iii) teor elevado de água / CO2 no óleo (modo 3). Além disso, uma turbina a gás aeroderivativa (RB211G62 DLE 60Hz) para aplicação offshore é considerada para a unidade de geração de potência elétrica e calor, através dos seus dados reais de desempenho. O impacto de oito parâmetros termodinâmicos de entrada no consumo de combustível e na recuperação de hidrocarbonetos líquidos da unidade FPSO são investigados pelo método SS-ANOVA (Smoothing Spline ANOVA). A partir do SS-ANOVA, os parâmetros de entrada que apresentaram o maior impacto no consumo de combustível e na recuperação de hidrocarbonetos líquidos foram selecionados para aplicação em um procedimento de otimização. Os processos de análise da triagem (usando SS-ANOVA) e de otimização, que consiste em um Algoritmo Híbrido (método NSGA-II + SQP), utilizaram o software Aspen HYSYS como simulador de processo.
As funções objetivo utilizadas na otimização foram: minimização do consumo de combustível das plantas de processamento e utilidades e maximização da recuperação de hidrocarbonetos líquidos. Ainda utilizando SS-ANOVA, a análise estatística realizada revelou que os parâmetros mais importantes que afetam o consumo de combustível da planta são: (1) pressão de saída da primeira válvula de controle (P1); (2) pressão de saída do segundo estágio do trem de separação (e antes da mistura com água de diluição) (P2); (3) pressão de entrada do terceiro estágio do trem de separação (P3); (4) pressão de entrada da água de diluição (P4); (5) pressão de saída do compressor principal de gás (Pc); (6) temperatura de saída de petróleo no primeiro trocador de calor (T1); (7) temperatura de saída de petróleo no segundo trocador de calor (T2); e (8) temperatura da água de diluição (T3). Os parâmetros de entrada P1, P2, P3 e Pc correspondem a 96% da contribuição total para o consumo de combustível da planta para o modo 1. Analogamente, os três parâmetros de entrada P3, Pc e T2 correspondem a 97% e 97% da contribuição total para o consumo de combustível para os modos 2 e 3, respectivamente. Para a recuperação de hidrocarbonetos líquidos da planta, os parâmetros de entrada P1, P2, P3 e T2 correspondem a 95% da contribuição total para a recuperação de hidrocarbonetos líquidos para o modo 1. Da mesma forma, os três parâmetros de entrada P3, P2 e T2 correspondem a 97% e 98% da contribuição total para a recuperação de hidrocarbonetos líquidos para os modos 2 e 3, respectivamente. Os resultados do caso otimizado indicaram que a minimização do consumo de combustível é obtida aumentando a pressão de operação no terceiro estágio do trem de separação e diminuindo a temperatura de operação no segundo estágio do trem de separação para todos os modos de operação. Houve uma redução na demanda de potência de 6,4% para o modo 1, 10% para o modo 2 e 2,9% para o modo 3, em comparação com o caso base.
Consequentemente, o consumo de combustível da planta foi reduzido em 4,46% para o modo 1, 8,34% para o modo 2 e 2,43% para o modo 3, quando comparado com o caso base. Além disso, o procedimento de otimização identificou uma melhora na recuperação dos componentes voláteis, em comparação com os casos base. A condição ótima de operação encontrada pelo procedimento de otimização da recuperação de hidrocarbonetos líquidos apresentou um aumento de 4,36% para o modo 1, 3,79% para o modo 2 e 1,75% para o modo 3 na recuperação de hidrocarbonetos líquidos (e estabilização), quando comparada com as condições operacionais convencionais dos seus casos base.
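The hybrid NSGA-II + SQP procedure above trades fuel consumption off against liquids recovery; the core of any such multi-objective step is non-dominance filtering. A minimal Pareto-front sketch over hypothetical (fuel, recovery) points, minimizing the first coordinate and maximizing the second:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing fuel (first
    coordinate) and maximizing recovery (second coordinate).
    A point is dominated if some other point is at least as good in
    both objectives and different from it."""
    return [
        p for p in points
        if not any(q != p and q[0] <= p[0] and q[1] >= p[1] for q in points)
    ]
```

NSGA-II applies this kind of ranking generation after generation; a gradient method such as SQP can then polish individual points of the front, which is the role of the hybrid step described in the abstract.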
518.
Functional analytic approaches to some stochastic optimization problems. Backhoff, Julio Daniel, 17 February 2015.
In dieser Arbeit beschäftigen wir uns mit Nutzenoptimierungs- und stochastischen Kontrollproblemen unter mehreren Gesichtspunkten. Wir untersuchen die Parameterunsicherheit solcher Probleme im Sinne des Robustheits- und des Sensitivitätsparadigmas. Neben der Betrachtung dieser Probleme widmen wir uns auch einem Zweiagentenproblem, bei dem der eine dem anderen das Management seines Portfolios vertraglich überträgt. Wir betrachten das robuste Nutzenoptimierungsproblem in Finanzmarktmodellen, wobei wir Bedingungen für seine Lösbarkeit formulieren, ohne jegliche Kompaktheit der Unsicherheitsmenge zu fordern, welche die Maße enthält, auf die der Optimierer robustifiziert. Unsere Bedingungen sind über gewisse Funktionenräume beschrieben, die im Allgemeinen Modularräume sind und mittels derer wir eine Min-Max-Gleichung und die Existenz optimaler Strategien beweisen. In vollständigen Märkten ist dieser Raum ein Orlicz-Raum, und nachdem man seine Reflexivität explizit überprüft hat, erhält man zusätzlich die Existenz eines Worst-Case-Maßes, das wir charakterisieren. Für die Parameterabhängigkeit stochastischer Kontrollprobleme entwickeln wir einen Sensitivitätsansatz. Das Kernargument ist die Korrespondenz zwischen dem adjungierten Zustand zur schwachen Formulierung des Pontryaginschen Prinzips und den Lagrange-Multiplikatoren, die mit der Kontrollgleichung assoziiert werden, wenn man sie als Nebenbedingung betrachtet. Der Sensitivitätsansatz wird dann auf konvexe Probleme mit additiver oder multiplikativer Störung angewendet. Das Zweiagentenproblem formulieren wir in diskreter Zeit. Wir wenden in größter Allgemeinheit die Methoden der bedingten Analysis auf den Fall linearer Verträge an und zeigen, dass sich die Mehrheit der in der Literatur unter sehr spezifischen Annahmen bekannten Ergebnisse auf eine deutlich umfassendere Klasse von Modellen verallgemeinern lässt. Insbesondere erhalten wir die Existenz eines first-best-optimalen Vertrags und dessen Implementierbarkeit.
/ In this thesis we deal with utility maximization and stochastic optimal control through several points of view. We shall be interested in understanding how such problems behave under parameter uncertainty under respectively the robustness and the sensitivity paradigms. Afterwards, we leave the single-agent world and tackle a two-agent problem where the first one delegates her investments to the second through a contract. First, we consider the robust utility maximization problem in financial market models, where we formulate conditions for its solvability without assuming compactness of the densities of the uncertainty set, which is a set of measures upon which the maximizing agent performs robust investments. These conditions are stated in terms of functional spaces wich generally correspond to Modular spaces, through which we prove a minimax equality and the existence of optimal strategies. In complete markets the space is an Orlicz one, and upon explicitly granting its reflexivity we obtain in addition the existence of a worst-case measure, which we fully characterize. Secondly we turn our attention to stochastic optimal control, where we provide a sensitivity analysis to some parameterized variants of such problems. The main tool is the correspondence between the adjoint states appearing in a (weak) stochastic Pontryagin principle and the Lagrange multipliers associated to the controlled equation when viewed as a constraint. The sensitivity analysis is then deployed in the case of convex problems and additive or multiplicative perturbations. In a final part, we proceed to Principal-Agent problems in discrete time. Here we apply in great generality the tools from conditional analysis to the case of linear contracts and show that most results known in the literature for very specific instances of the problem carry on to a much broader setting. In particular, the existence of a first-best optimal contract and its implementability by the Agent is obtained.
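The robust (maximin) criterion described above can be illustrated with a minimal sketch: a one-period market with a finite uncertainty set of measures, log utility, and a grid search for the worst-case-optimal investment fraction. All figures below are invented for illustration and stand in for the thesis's far more general duality framework.

```python
import numpy as np

# Hypothetical one-period market: the stock return takes one of 3 values.
returns = np.array([-0.20, 0.05, 0.30])
# Uncertainty set: candidate probability measures (one per row), assumed here.
measures = np.array([
    [0.35, 0.35, 0.30],
    [0.20, 0.30, 0.50],
    [1 / 3, 1 / 3, 1 / 3],
])

def robust_value(pi):
    """Worst-case expected log utility when a fraction pi is in the stock."""
    wealth = 1.0 + pi * returns            # initial wealth 1, zero interest
    return float(np.min(measures @ np.log(wealth)))

# Maximin strategy by grid search over admissible fractions in [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
pi_star = grid[int(np.argmax([robust_value(p) for p in grid]))]
```

With these toy numbers every measure assigns the stock a positive worst-case expected log return, so the robust investor holds a strictly positive fraction of the stock despite the uncertainty.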
|
519 |
In the Wake of the Financial Crisis - Regulators' and Investors' Perspectives. Pang, Weijie 23 April 2019 (has links)
Before the 2008 financial crisis, most research in financial mathematics focused on risk management and the pricing of options without considering the effects of counterparty defaults, illiquidity, systemic risk, or the role of the repurchase agreement (Repo). During the 2008 crisis, a frozen Repo market led to a shutdown of short sales in the stock market, and cyclical interdependencies among financial corporations meant that the default of one firm could seriously affect other firms and even the whole financial network. In this dissertation, we consider financial markets as shaped by financial crises, from two distinct perspectives: an investor's and a regulator's. From an investor's perspective, models were recently proposed to compute the total valuation adjustment (XVA) of derivatives without accounting for a potential crisis in the market. In our research, we include a possible crisis by applying an alternating renewal process to describe the switching between a normal financial state and a financial crisis state. We develop a framework for pricing the XVA of a European claim in this state-dependent setting, represent the price as the solution of a backward stochastic differential equation, and prove the existence and uniqueness of this solution. To study financial networks from a regulator's perspective, one popular method is the fixed-point approach of L. Eisenberg and T. Noe. In practice, however, there is no accurate record of interbank liabilities, so these must be estimated before Eisenberg-Noe type models can be used. In our research, we conduct a sensitivity analysis of the Eisenberg-Noe framework and quantify the effect of estimation errors on the clearing payments. We show that the effect of misspecified interbank connections on the clearing payments can be described via directional derivatives, which can themselves be represented as solutions of fixed-point equations.
We also compute the probability of observing clearing payment deviations of a certain magnitude.
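The Eisenberg-Noe clearing mechanism that this sensitivity analysis builds on can be sketched as a monotone fixed-point iteration. The three-bank liability matrix and outside assets below are invented toy figures, not data from the dissertation.

```python
import numpy as np

# Toy interbank network (invented figures): L[i, j] is what bank i owes bank j.
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.2, 0.1, 1.0])            # outside assets of each bank

p_bar = L.sum(axis=1)                    # total nominal obligations per bank
# Relative liabilities: the share of bank i's payments that goes to bank j.
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
               where=p_bar[:, None] > 0)

# Phi(p) = min(p_bar, e + Pi^T p) is monotone, so iterating from p_bar
# converges to the greatest clearing payment vector of Eisenberg and Noe.
p = p_bar.copy()
for _ in range(10_000):
    p_next = np.minimum(p_bar, e + Pi.T @ p)
    converged = np.max(np.abs(p_next - p)) < 1e-12
    p = p_next
    if converged:
        break
```

In this example banks 0 and 1 default (they pay less than their nominal obligations) while bank 2 clears in full; perturbing an entry of `L` and re-solving gives a finite-difference view of the directional derivatives studied above.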
|
520 |
Analyses de sensibilité et d'identifiabilité globales : application à l'estimation de paramètres photophysiques en thérapie photodynamique / Global sensitivity and identifiability analyses : application to the estimation of the photophysical parameters in photodynamic therapy. Dobre, Simona 22 June 2010 (has links)
Photodynamic therapy (PDT) is a medical treatment for certain types of cancer. It relies on a photosensitizing agent that accumulates in pathological tissues. This agent is then activated by light of a specific wavelength, producing, after a cascade of reactions, reactive oxygen species that damage cancer cells. This thesis addresses the identifiability and sensitivity analyses of the parameters of the nonlinear dynamic model adopted. After specifying different frameworks for identifiability analysis, we focus on a posteriori identifiability, for fixed experimental conditions, and then on practical identifiability, which additionally accounts for measurement noise. For the latter framework, we propose a methodology for local analysis around particular parameter values. Regarding the identifiability of the parameters of the dynamic model of the photocytotoxic phase of PDT, we show that of the ten parameters that are locally identifiable a posteriori, only one is identifiable in practice. Nevertheless, these local results remain insufficient given the wide possible ranges of the model parameters, and they need to be complemented by a global analysis. The lack of methods for testing global a posteriori or practical identifiability led us to a global sensitivity analysis of the model output with respect to its parameters. A variance-based global sensitivity analysis method highlighted three influential parameters. We then examine the links between the global identifiability and sensitivity analyses of the parameters, using a Sobol' decomposition. We show that the following links exist: a null total sensitivity function implies a non-identifiable parameter; two collinear sensitivity functions imply the mutual non-identifiability of the parameters in question; the non-injectivity of the output with respect to one of its parameters can also lead to the non-identifiability of that parameter, but this last case cannot be detected by analyzing the sensitivity functions alone. In sum, detecting parameters that are not globally identifiable in a given experimental framework from global sensitivity analysis results can only be partial: it catches two (null or negligible sensitivity, and correlated sensitivities) of the three causes of non-identifiability. / Photodynamic therapy (PDT) is a treatment of dysplastic tissues such as cancers. Mainly, it involves the selective uptake and retention of a photosensitizing drug (photosensitizer, PS) in the tumor, followed by its illumination with light of the appropriate wavelength. The PS activation is thought to produce, after multiple intermediate reactions, singlet oxygen at high doses (in the presence of molecular oxygen) and thereby to initiate apoptotic and necrotic death of the tumor. The efficiency of PDT stems from the optimal interaction between three factors: the photosensitizing agent (its chemical and photobiological properties), light (illumination conditions), and oxygen (its availability in the target tissue). The relative contribution of each of these factors has an impact on the effectiveness of the treatment. It is a dynamic process, and the objective of the thesis is to characterize it by a mathematical model. The points raised relate primarily to the determination of a dynamic model of the photodynamic phase (production of singlet oxygen), and to the global identifiability and sensitivity analyses of the parameters of the model thus built.
The main difficulties of this work are the nonlinear structure of the photophysical model, the wide range of possible values (up to four decades) of the unknown parameters, the lack of information (only one measured variable out of six state variables), and the limited degrees of freedom in the choice of the laser light stimulus (the input variable). Another issue concerns the links between the non-identifiability of parameters and the properties of global sensitivity functions. Two relationships between these two concepts are presented. We stress the need to remain cautious about parameter identifiability conclusions based on sensitivity analysis alone. Looking ahead, these results could lead to the development of new approaches to testing the non-identifiability of parameters in an experimental framework.
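The variance-based first-order indices underlying the Sobol' decomposition discussed above can be estimated with a pick-freeze Monte Carlo scheme. The three-parameter toy model below is a stand-in for illustration only, not the thesis's photophysical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in model: output dominated by x0, nearly insensitive to x2.
def model(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(-1.0, 1.0, (n, d))       # two independent parameter samples
B = rng.uniform(-1.0, 1.0, (n, d))

y_A = model(A)
S = np.empty(d)
for i in range(d):
    AB_i = B.copy()
    AB_i[:, i] = A[:, i]                 # "freeze" parameter i at A's values
    # Pick-freeze estimate of first-order index S_i = Cov(y_A, y_ABi) / Var(y)
    S[i] = np.cov(y_A, model(AB_i))[0, 1] / np.var(y_A, ddof=1)
```

A parameter whose sensitivity index is null or negligible, as for the third parameter here, is precisely the first of the non-identifiability symptoms established above.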
|