371 |
A Computer-Based Decision Tool for Prioritizing the Reduction of Airborne Chemical Emissions from Canadian Oil Refineries Using Estimated Health Impacts (Gower, Stephanie Karen, January 2007)
Petroleum refineries emit a variety of airborne substances which may be harmful to human health. HEIDI II (Health Effects Indicators Decision Index II) is a computer-based decision analysis tool which assesses airborne emissions from Canada's oil refineries for reduction, based on ordinal ranking of estimated health impacts. The model was designed by a project team within NERAM (Network for Environmental Risk Assessment and Management) and assembled with significant stakeholder consultation. HEIDI II is publicly available as a deterministic Excel-based tool which ranks 31 air pollutants based on predicted disease incidence or estimated DALYs (disability-adjusted life years). The model includes calculations to account for average annual emissions, ambient concentrations, stack height, meteorology/dispersion, photodegradation, and the population distribution around each refinery. Different formulations of continuous dose-response functions were applied to nonthreshold-acting air toxics, threshold-acting air toxics, and nonthreshold-acting CACs (criteria air contaminants). An updated probabilistic version of HEIDI II was developed in Matlab to account for parameter uncertainty and identify key leverage variables. Sensitivity analyses indicate that parameter uncertainty in the model variables for annual emissions and for concentration-response/toxicological slopes has the greatest leverage on predicted health impacts. Scenario analyses suggest that the geographic distribution of population density around a refinery site is an important predictor of total health impact. Several ranking metrics (predicted case incidence, simple DALY, and complex DALY) and ordinal ranking approaches (deterministic model, average from Monte Carlo simulation, test of stochastic dominance) were used to identify priority substances for reduction; the results were similar in each case. The predicted impacts of primary and secondary particulate matter (PM) consistently outweighed those of the air toxics. Nickel, PAHs (polycyclic aromatic hydrocarbons), BTEX (benzene, toluene, ethylbenzene and xylene), sulphuric acid, and vanadium were consistently identified as priority air toxics at the refineries that reported emitting them. For many substances, the rank order is indeterminate when parametric uncertainty and variability are considered.
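The ranking-under-uncertainty step described above can be sketched compactly. The following Python fragment is a toy illustration, not the HEIDI II model: the substance list, emission rates, slope values, and lognormal spreads are all invented, and the health impact is reduced to emissions times a dose-response slope.

```python
# Toy sketch: ordinal ranking of substances under Monte Carlo parameter
# uncertainty. All numbers below are invented for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
substances = ["PM2.5", "nickel", "benzene", "vanadium"]
emis_med = np.array([120.0, 2.0, 40.0, 5.0])   # hypothetical emissions (t/yr)
slope_med = np.array([0.8, 6.0, 0.1, 1.5])     # hypothetical DALY-per-tonne slopes

n = 10_000
emis = emis_med * rng.lognormal(0.0, 0.4, size=(n, 4))   # emission uncertainty
slope = slope_med * rng.lognormal(0.0, 0.7, size=(n, 4)) # slope uncertainty
daly = emis * slope                                      # predicted DALYs per draw

ranks = np.argsort(np.argsort(-daly, axis=1), axis=1) + 1  # 1 = top priority
for j, name in enumerate(substances):
    print(f"{name:9s} mean rank {ranks[:, j].mean():.2f}, "
          f"P(rank 1) = {np.mean(ranks[:, j] == 1):.2f}")
```

Reporting the full rank distribution, rather than a single deterministic rank, is what makes indeterminate orderings visible.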
|
372 |
A SENSITIVITY ANALYSIS FOR RELATIVE IMPORTANCE WEIGHTS IN THE META-ANALYTIC CONTEXT: A STEP TOWARDS NARROWING THE THEORY-EMPIRICISM GAP IN TURNOVER (Field, James G, 01 January 2017)
Turnover is one of the most important phenomena for management scholars and practitioners. Yet researchers and practitioners are often frustrated by their inability to accurately predict why individuals leave their jobs. This is worrisome given that total replacement costs can exceed 100% of an employee's salary (Cascio, 2006) and can represent up to 40% of a firm's pre-tax income (Allen, 2008). Motivated by these concerns, the purpose of this study was to assess the predictive validity of commonly investigated correlates and, by extension, conceptualizations of employee turnover using a large-scale database of scientific findings. Results indicate that job satisfaction, organizational commitment, and embeddedness (e.g., person-job fit, person-organization fit) may be the most valid proximal predictors of turnover intention. Results for a tripartite analysis of the potential empirical redundancy between job satisfaction and organizational commitment when predicting turnover intention align well with previous research on this topic and generally suggest that the two constructs may be empirically indistinguishable in the turnover context. Taken together, this study has important implications for the turnover and sensitivity analysis literatures. With regard to the sensitivity analysis literature, this study demonstrates the application of a sensitivity analysis for relative importance weights in the meta-analytic context. This new method takes into account variance around the meta-analytic mean effect size estimate when imputing relative importance weights and may be adapted to other correlation matrix-based techniques (e.g., structural equation modeling) that are often used to test theory.
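The idea of a sensitivity analysis for relative importance weights can be illustrated with a minimal sketch: perturb each meta-analytic correlation around its mean, recompute Johnson's relative weights on each draw, and summarize the resulting weight distributions. The predictor set, correlations, and common standard error below are hypothetical placeholders, not the study's meta-analytic estimates.

```python
# Minimal sketch: Johnson's relative weights recomputed under sampling
# noise on the meta-analytic correlation matrix. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
preds = ["job_sat", "org_commit", "embeddedness"]
rxx_mean = np.array([[1.00, 0.60, 0.45],
                     [0.60, 1.00, 0.40],
                     [0.45, 0.40, 1.00]])      # predictor intercorrelations
rxy_mean = np.array([-0.45, -0.40, -0.35])     # validities vs turnover intention
se = 0.03                                      # assumed SE of each correlation

def relative_weights(rxx, rxy):
    vals, vecs = np.linalg.eigh(rxx)
    lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # R_xx^(1/2)
    beta = np.linalg.solve(lam, rxy)
    return (lam**2) @ (beta**2)                    # raw weights, sum to R^2

draws = []
while len(draws) < 5000:
    rxx = rxx_mean.copy()
    iu = np.triu_indices(3, k=1)
    noise = rng.normal(0, se, size=3)
    rxx[iu] += noise
    rxx[(iu[1], iu[0])] += noise                   # keep the matrix symmetric
    if np.linalg.eigvalsh(rxx).min() <= 0:         # discard non-positive-definite draws
        continue
    draws.append(relative_weights(rxx, rng.normal(rxy_mean, se)))

w = np.array(draws)
for j, p in enumerate(preds):
    lo, hi = np.percentile(w[:, j], [2.5, 97.5])
    print(f"{p:12s} weight {w[:, j].mean():.3f} [{lo:.3f}, {hi:.3f}]")
```

Comparing the overlap of these weight intervals is one way to judge whether an importance ordering survives the uncertainty around the meta-analytic means.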
|
373 |
Improving microalgae biofuel production: an engineering management approach (Mathew, Domoyi Castro, January 2014)
The use of microalgae culture to convert CO2 from power plant flue gases into biomass that is readily converted into biofuels offers a new frame of opportunities to enhance, complement or replace fossil-fuel use. Apart from being renewable, microalgae also have the capacity to utilise materials from a variety of wastewaters and the ability to yield both liquid and gaseous biofuels. However, the processes of cultivation, incorporation of a production system for power plant waste flue gas use, algae harvesting, and oil extraction from the biomass have many challenges. Using SimaPro software, a Life Cycle Assessment (LCA) of the challenges limiting the microalgae (Chlorella vulgaris) biofuel production process was performed to study the algae-based pathway for producing biofuels. Attention was paid to material use, energy consumed and the environmental burdens associated with the production processes. The goal was to determine the weak spots within the production system and identify changes in particular data-sets that can lead to lower material use, lower energy consumption and lower environmental impacts than the baseline microalgae biofuel production system. The analysis considered a hypothetical transesterification and Anaerobic Digestion (AD) transformation of the algae-to-biofuel process. Life Cycle Inventory (LCI) characterisation results of the baseline biodiesel (BD) transesterification scenario indicate that heating to get the biomass to 90% DWB accounts for 64% of the total input energy, while electrical energy and fertilizer obligations represent 19% and 16% respectively. Also, Life Cycle Impact Assessment (LCIA) results of the baseline BD production scenario show a high proportional contribution of electricity and heat energy obligations for most impact categories considered, relative to other resources. This is attributed to the concentration/drying requirement for algae biomass in order to ease the downstream processes of lipid extraction and subsequent transesterification of extracted lipids into BD. Thus, four prospective alternative production scenarios were characterised to evaluate the extent to which they lower material use, energy consumption and environmental burdens relative to the baseline algae biofuel production system. A 55.3% reduction in the mineral use obligation was the most significant impact reduction, due to the integration of 100% recycling of production harvest water into the AD production system. Recycling also saw water demand reduced from 3726 kg freshwater per kg BD to 591 kg freshwater per kg BD after accounting for evaporative losses/biomass drying in the BD transesterification production process. Also, the use of wastewater/seawater as an alternative growth medium for the BD production system indicated potential savings of 4.2 MJ (11.8%) in the electricity/heat obligation, 10.7% reductions for the climate change impact, and an 87% offset in the mineral use requirement relative to the baseline production system. Likewise, LCIA characterisation results comparing the baseline production scenarios with a set-up including co-product economic allocation show very interesting outcomes, indicating a -12 MJ surplus (-33%) for the fossil fuels resource use impact category, 52.7% impact reductions for mineral use and 56.6% reductions for land use impact categories relative to the baseline BD production process model.
These results show the importance of allocation consideration to LCA as a decision support tool. Overall, process improvements that are needed to optimise economic viability also improve the life cycle environmental impacts or sustainability of the production systems. The results agree reasonably with a Monte Carlo sensitivity analysis, with the production scenario proposing the exploitation of wastewater/seawater to culture algae biomass offering the best outcome. This study excludes additional resources such as the production facility and its construction process, feedstock processing logistics and transport infrastructure, which may have implications for the results. Future LCA study will require extensive consideration of these additional resources, such as facility size and its construction, better engineering data for water transfer, combined heat and power plant efficiency estimates, and the fate of long-term emissions such as organic nitrogen in the AD digestate. Conclusions were drawn and suggestions proffered for further study.
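As a rough illustration of the Monte Carlo sensitivity step mentioned above, the sketch below propagates uncertain energy inputs through a toy biodiesel energy balance and ranks inputs by rank correlation with the output. The energy terms, ranges, and recycling credit are placeholders, not the SimaPro inventory of the thesis.

```python
# Toy Monte Carlo check on an LCA-style energy balance; all ranges invented.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 20_000
heat = rng.triangular(15, 23, 30, n)     # MJ per kg biodiesel, drying heat
elec = rng.triangular(4, 7, 10, n)       # MJ per kg, electricity
fert = rng.triangular(3, 6, 9, n)        # MJ per kg, fertilizer embodied energy
recycle = rng.uniform(0.0, 1.0, n)       # fraction of harvest water recycled

# Toy credit: recycling returns nutrients, trimming the fertilizer burden.
input_energy = heat + elec + fert * (1.0 - 0.5 * recycle)
output_energy = 37.8                     # MJ/kg, lower heating value of biodiesel
ner = output_energy / input_energy       # net energy ratio

print(f"NER median {np.median(ner):.2f}, P(NER > 1) = {(ner > 1.0).mean():.2f}")
for name, x in [("heat", heat), ("elec", elec), ("fert", fert), ("recycle", recycle)]:
    print(f"{name:8s} Spearman rho vs NER = {spearmanr(x, ner)[0]:+.2f}")
```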
|
374 |
SENSITIVITY ANALYSIS – THE EFFECTS OF GLASGOW OUTCOME SCALE MISCLASSIFICATION ON TRAUMATIC BRAIN INJURY CLINICAL TRIALS (Lu, Juan, 19 April 2010)
I. EFFECTS OF GLASGOW OUTCOME SCALE MISCLASSIFICATION ON TRAUMATIC BRAIN INJURY CLINICAL TRIALS The Glasgow Outcome Scale (GOS) is the primary endpoint for efficacy analysis of clinical trials in traumatic brain injury (TBI). Accurate and consistent assessment of outcome after TBI is essential to the evaluation of treatment results, particularly in the context of multicenter studies and trials. Inconsistent measurement or interobserver variation on the GOS outcome, or for that matter on any outcome scale, may adversely affect the sensitivity to detect treatment effects in a clinical trial. The objective of this study is to examine the effects of nondifferential misclassification of the widely used five-category GOS outcome scale and in particular to assess the impact of this misclassification on detecting a treatment effect and on statistical power. We followed two approaches. First, outcome differences were analyzed before and after correction for misclassification using a dataset of 860 patients with severe brain injury randomly sampled from two TBI trials with known differences in outcome. Second, the effects of misclassification on outcome distribution and statistical power were analyzed in simulation studies on a hypothetical 800-patient dataset. Three potential patterns of nondifferential misclassification (random, upward and downward) on the dichotomous GOS outcome were analyzed, and the power of finding treatment differences was investigated in detail. All three patterns of misclassification reduce the power of detecting the true treatment effect and therefore lead to a reduced estimation of the true efficacy. The magnitude of such influence depends not only on the size of the misclassification, but also on the magnitude of the treatment effect. In conclusion, nondifferential misclassification directly reduces the power of finding the true treatment effect. An awareness of this procedural error and methods to reduce misclassification should be incorporated in TBI clinical trials. II. IMPACT OF MISCLASSIFICATION ON THE ORDINAL GLASGOW OUTCOME SCALE IN TRAUMATIC BRAIN INJURY CLINICAL TRIALS The methods of ordinal GOS analysis are recommended to increase efficiency and optimize future TBI trials. To further explore the utility of the ordinal GOS in TBI trials, this study extends our previous investigation regarding the effect of misclassification on the dichotomous GOS to examine the impact of misclassification on the 5-point ordinal scale. The impact of nondifferential misclassification on the ordinal GOS was explored via probabilistic sensitivity analyses using TBI patient datasets contained in the IMPACT database (N=9,205). Three patterns of misclassification, including random, upward and downward patterns, were extrapolated, with pre-specified outcome classification error distributions. The conventional 95% confidence intervals and the simulation intervals, which account for the misclassification only and for the misclassification and random errors together, are reported. Our simulation results showed that, given a specification of a minimum of 80%, modes of 85% and 95% and a maximum of 100% for both sensitivity and specificity (random pattern), or given the same trapezoidal distributed sensitivity but a perfect specificity (upward pattern), the misclassification would have caused an underestimated ordinal GOS in the observed data.
In another scenario, given the same trapezoidal distributed specificity but a perfect sensitivity (downward pattern), the misclassification would have resulted in an inflated GOS estimation. Thus, the probabilistic sensitivity analysis suggests that the effect of nondifferential misclassification on the ordinal GOS is likely to be small compared with the impact in the binary GOS situation. The results indicate that the ordinal GOS analysis may gain efficiency not only from the nature of the ordinal outcome, but also from the relatively smaller impact of the potential misclassification, compared with the conventional binary GOS analysis. Nevertheless, outcome assessment following TBI is a complex problem. The assessment quality could be influenced by many factors. All possible aspects must be considered to ensure the consistency and reliability of the assessment and optimize the success of the trial. III. A METHOD FOR REDUCING MISCLASSIFICATION IN THE EXTENDED GLASGOW OUTCOME SCORE The eight-point extended Glasgow Outcome Scale (GOSE) is commonly used as the primary outcome measure in traumatic brain injury (TBI) clinical trials. The outcome is conventionally collected through a structured interview with the patient alone or together with a caretaker. Despite the fact that using structured interview questionnaires helps raters reach agreement in GOSE assessment, significant variation remains among different raters. We introduce an alternative GOSE rating system as an aid in determining GOSE scores, with the objective of reducing inter-rater variation in the primary outcome assessment in TBI trials. Forty-five trauma centers were randomly assigned to three groups to assess GOSE scores on sample cases, using the alternative GOSE rating system coupled with central quality control (Group 1), the alternative system alone (Group 2), or conventional structured interviews (Group 3). The inter-rater variation between an expert and untrained raters was assessed for each group and reported through raw agreement and weighted kappa (κ) statistics. Groups 2 and 3, without central review, yielded inter-rater agreements of 83% (weighted κ = 0.81; 95% CI 0.69, 0.92) and 83% (weighted κ = 0.76; 95% CI 0.63, 0.89), respectively, in GOS scores. In GOSE, these groups had agreements of 76% (weighted κ = 0.79; 95% CI 0.69, 0.89) and 63% (weighted κ = 0.70; 95% CI 0.60, 0.81), respectively. The group using the alternative rating system coupled with central monitoring yielded the highest inter-rater agreement among the three groups in rating GOS (97%; weighted κ = 0.95; 95% CI 0.89, 1.00) and GOSE (97%; weighted κ = 0.97; 95% CI 0.91, 1.00). The alternative system is an improved GOSE rating method that reduces inter-rater variation and provides, for the first time, source documentation and structured narratives that allow a thorough central review of information. The data suggest that a collective effort can be made to minimize inter-rater variation.
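The simulation logic behind Part I (nondifferential misclassification eroding power on the dichotomized GOS) can be sketched as follows. Trial size, outcome rates, and the sensitivity/specificity pairs are illustrative assumptions, not the trial or IMPACT data.

```python
# Sketch: power loss from nondifferential misclassification of a binary outcome.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

def power(p_ctrl, p_trt, sens, spec, n_arm=400, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        y0 = rng.random(n_arm) < p_ctrl            # true favorable outcomes, control
        y1 = rng.random(n_arm) < p_trt             # true favorable outcomes, treatment
        # Nondifferential: the same sens/spec applies in both arms.
        obs0 = np.where(y0, rng.random(n_arm) < sens, rng.random(n_arm) > spec)
        obs1 = np.where(y1, rng.random(n_arm) < sens, rng.random(n_arm) > spec)
        table = [[obs0.sum(), n_arm - obs0.sum()],
                 [obs1.sum(), n_arm - obs1.sum()]]
        if chi2_contingency(table)[1] < alpha:     # [1] is the p-value
            hits += 1
    return hits / reps

# Assumed true effect: 40% vs 50% favorable on the dichotomous GOS.
for sens, spec in [(1.0, 1.0), (0.9, 0.9), (0.85, 0.95), (0.95, 0.85)]:
    print(f"sens={sens:.2f} spec={spec:.2f} -> power {power(0.40, 0.50, sens, spec):.2f}")
```

Under these assumptions, every departure from perfect classification shrinks the observed effect and the estimated power, mirroring the abstract's conclusion.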
|
375 |
Modélisation de la demande énergétique des bâtiments à l'échelle urbaine : contribution de l'analyse de sensibilité à l'élaboration de modèles flexibles / Modeling energy demand of buildings at urban scale (Garcia Sanchez, David, 29 October 2012)
Pour répondre aux enjeux énergétiques et climatiques, une des échelles d'action pertinentes est désormais celle du quartier ou de la ville. Des besoins de connaissance, d'outils d'aide à la décision et d'évaluation à cette échelle se manifestent de plus en plus. Un des volets concerne la modélisation de la demande d'énergie des bâtiments résidentiels, préalable à la mise en place d'actions de rénovation de l'existant ou à la valorisation de sources d'énergie locales. La diversité de situations de terrains, d'objectifs d'acteurs et de contextes de disponibilité de données incite à rechercher des modèles flexibles, aptes à produire de l'information pour différentes applications, à partir de jeux alternatifs de données d'entrée, combinant des modèles de natures diverses (notamment physiques et statistiques) selon les besoins. Dans cet esprit, le présent travail cherche à explorer le potentiel de méthodes dites ascendantes, s'appuyant sur des modèles développés à l'origine pour la simulation à l'échelle d'un bâtiment isolé, mais extrapolés ici pour le parc de bâtiments d'une zone urbaine sur la base de bâtiments types. Les deux questions clés abordées sont celles de la sélection des bâtiments types et de la reconstitution des données d'entrée pertinentes sur le plan statistique pour la zone étudiée. Des techniques d'analyse de sensibilité, en particulier la méthode des effets élémentaires de Morris, ont été appliquées à un code de calcul thermique de bâtiment (ESP-r). Elles ont mis en évidence une réponse non linéaire du modèle, notamment du fait des interactions entre paramètres et de la dispersion des paramètres d'entrée. Elles ont permis d'identifier les paramètres les plus sensibles et les plus en interaction (concernant les bâtiments eux-mêmes, leur environnement ou leurs habitants), sur lesquels doit être concentré le travail de collecte ou de reconstitution statistique. Un modèle, dénommé MEDUS, de reconstitution de la distribution des besoins de chaleur sur un quartier à partir de trois typologies de bâtiments, a été développé et testé sur le secteur St-Félix à Nantes. Il est alimenté par des données INSEE à l'échelle d'un IRIS. Ses résultats sont analysés, à la fois sous l'angle de la pertinence des typologies choisies et dans une perspective d'application à l'échelle du quartier. / Urban scale is now considered one of the most relevant scales at which to face energy and climate challenges. Specific needs for knowledge, decision-making tools and evaluation are identified at the urban scale. Modelling the energy demand of residential buildings is one key aspect, prior to energy retrofitting of the existing building stock or to the valorisation of local energy sources. The diversity of local contexts, stakeholder goals and data availability calls for flexible models, able to produce information for different applications from alternative input data sets, combining different types of basic models (namely physical and statistical ones) according to user needs. The present work explores the potential of bottom-up approaches, based on engineering models developed originally for isolated buildings. These models are extrapolated to the complete set of buildings in a city or neighbourhood, based on building archetypes.
Two key questions tackled are the selection of suitable archetypes and the reconstitution of relevant input data, statistically representative for the area of interest. Sensitivity analysis techniques, in particular the Morris elementary effects method, have been applied to a thermal simulation programme (ESP-r). A non-linear response of the model has been emphasized, caused by the scattering of input parameters and by interaction effects. The most influential and most interacting parameters have been identified; they concern the buildings themselves, their environment and the inhabitants. Data collection and statistical reconstitution must therefore be concentrated on these main parameters in priority. A model of the heat demand at a neighbourhood scale, called MEDUS (Modelling Energy Demand at Urban Scale), has been developed and tested on the St-Félix sector in Nantes. The application is based on three building archetypes. Census data (INSEE) available at the sector scale are the main input data. Results are analyzed both to check archetype relevancy and to study a possible application for evaluating actions at the sector scale, such as energy retrofitting.
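A minimal hand-rolled version of the Morris elementary effects screening used in this work might look like the sketch below; the toy "building model" with an interaction term stands in for ESP-r, and all factor meanings and coefficients are invented.

```python
# Minimal Morris elementary-effects screening on a toy model.
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Placeholder energy-demand surrogate with one interaction term:
    # x = [insulation, air change rate, setpoint, internal gains] in [0, 1].
    return 10 - 6 * x[0] + 4 * x[1] + 3 * x[2] + 5 * x[1] * x[2] - 2 * x[3]

k, r, delta = 4, 50, 0.5                        # factors, trajectories, grid jump
effects = [[] for _ in range(k)]

for _ in range(r):
    x = rng.choice([0.0, 0.25, 0.5], size=k)    # start so that x + delta <= 1
    y = model(x)
    for i in rng.permutation(k):                # move one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta)  # elementary effect of factor i
        x, y = x_new, y_new

for i in range(k):
    ee = np.asarray(effects[i])
    # mu* flags overall influence; sigma flags nonlinearity/interactions.
    print(f"x{i}: mu* = {np.abs(ee).mean():5.2f}, sigma = {ee.std():5.2f}")
```

Factors with large sigma relative to mu* are flagged as involved in interactions or nonlinearity, which is how the most "interacting" parameters are identified in this kind of screening.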
|
376 |
Propagation d'incertitudes et analyse de sensibilité pour la modélisation de l'infiltration et de l'érosion / Uncertainty propagation and sensitivity analysis for infiltration and erosion modeling (Rousseau, Marie, 17 December 2012)
Nous étudions la propagation et la quantification d'incertitudes paramétriques au travers de modèles hydrologiques pour la simulation des processus d'infiltration et d'érosion en présence de pluie et/ou de ruissellement. Les paramètres incertains sont décrits dans un cadre probabiliste comme des variables aléatoires indépendantes dont la fonction de densité de probabilité est connue. Cette modélisation probabiliste s'appuie sur une revue bibliographique permettant de cerner les plages de variations des paramètres. L'analyse statistique se fait par échantillonnage Monte Carlo et par développements en polynômes de chaos. Nos travaux ont pour but de quantifier les incertitudes sur les principales sorties du modèle et de hiérarchiser l'influence des paramètres d'entrée sur la variabilité de ces sorties par une analyse de sensibilité globale. La première application concerne les effets de la variabilité et de la spatialisation de la conductivité hydraulique à saturation du sol dans le modèle d'infiltration de Green--Ampt pour diverses échelles spatiales et temporelles. Notre principale conclusion concerne l'importance de l'état de saturation du sol. La deuxième application porte sur le modèle d'érosion de Hairsine--Rose. Une des conclusions est que les interactions paramétriques sont peu significatives dans le modèle de détachement par la pluie mais s'avèrent importantes dans le modèle de détachement par le ruissellement. / We study parametric uncertainty propagation and quantification in hydrological models for the simulation of infiltration and erosion processes in the presence of rainfall and/or runoff. Uncertain input parameters are treated in a probabilistic framework, considering them as independent random variables defined by a fixed probability density function. This probabilistic modeling is based on a literature review to identify the range of variation of the input parameters. The output statistical analysis is performed by Monte Carlo sampling and by polynomial chaos expansions. Our analysis aims at quantifying uncertainties in model outputs and establishing a hierarchy within input parameters according to their influence on output variability by means of global sensitivity analysis. The first application concerns the variability and spatial localization of the soil saturated hydraulic conductivity in the Green-Ampt infiltration model at different spatial and temporal scales. Our main conclusion is the importance of the soil saturation state. The second application deals with the Hairsine-Rose erosion model. One conclusion is that the parametric interactions are not significant in the rainfall detachment model, but they prove to be important in the runoff detachment model.
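For the first application, Monte Carlo propagation through the Green-Ampt model can be sketched as below: cumulative infiltration is obtained by fixed-point iteration on the implicit Green-Ampt equation while the saturated conductivity, suction head, and initial moisture deficit are sampled. The distributions are stand-ins, not those established by the thesis's literature review.

```python
# Sketch: parametric uncertainty propagation through the Green-Ampt model.
import numpy as np

rng = np.random.default_rng(5)

def green_ampt_F(Ks, S, t, iters=200):
    """Cumulative infiltration F (m) solving F = Ks*t + S*ln(1 + F/S),
    with S = psi * delta_theta; solved by fixed-point iteration."""
    F = Ks * t
    for _ in range(iters):
        F = Ks * t + S * np.log1p(F / S)
    return F

t = 3600.0                                    # one hour of ponded infiltration (s)
n = 10_000
Ks = rng.lognormal(np.log(1e-6), 1.0, n)      # saturated conductivity (m/s), assumed
psi = rng.uniform(0.05, 0.30, n)              # wetting-front suction head (m), assumed
# delta_theta encodes the initial saturation state highlighted in the thesis.
dtheta = rng.uniform(0.05, 0.35, n)           # available porosity (-), assumed

F = green_ampt_F(Ks, psi * dtheta, t)
print(f"F(1h): median {np.median(F)*1000:.1f} mm, 95% interval "
      f"[{np.percentile(F, 2.5)*1000:.1f}, {np.percentile(F, 97.5)*1000:.1f}] mm")
# Crude first look at the influence of the saturation state:
print(f"corr(log dtheta, log F) = {np.corrcoef(np.log(dtheta), np.log(F))[0, 1]:+.2f}")
```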
|
377 |
Etude régionale des crues éclair de l'arc méditerranéen français. Elaboration de méthodologies de transfert à des bassins versants non jaugés / Flash floods in the French Mediterranean region; toward transfer methodologies for ungauged catchments (Garambois, Pierre-André, 23 November 2012)
D'un point de vue climatique, la région méditerranéenne est propice aux évènements pluvio-orageux intenses, particulièrement en automne. Ces pluies s'abattent sur des bassins versants escarpés. La promptitude des crues ne laisse qu'un temps très court pour la prévision. L'amplitude de ces crues dépend de la grande variabilité des pluies et des caractéristiques des bassins versants. Les réseaux d'observations ne sont habituellement pas adaptés à ces petites échelles spatiales et l'intensité des événements affecte souvent la fiabilité des données quand elles existent, d'où l'existence de bassins non jaugés. La régionalisation en hydrologie s'attache à la détermination de variables hydrologiques aux endroits où ces données manquent. L'objectif de cette thèse est de contribuer à poser les bases d'une méthodologie adaptée à la transposition des paramètres d'un modèle hydrologique distribué dédié aux crues rapides de bassins versants bien instrumentés à des bassins versants non jaugés, et ce sur une large zone d'étude. L'outil utilisé est le modèle hydrologique distribué MARINE [Roux et al., 2011], dont l'une des originalités est de disposer d'un modèle adjoint permettant de mener à bien des calibrations et des analyses de sensibilité spatio-temporelles, qui servent à améliorer la compréhension des mécanismes de crue et à l'assimilation de données en temps réel pour la prévision. L'étude des sensibilités du modèle MARINE aborde la compréhension des processus physiques. Une large gamme de comportements hydrologiques est explorée. On met en avant quelques types de comportements des bassins versants pour la région d'étude [Garambois et al., 2012a]. Une sélection des évènements de calibration et une technique de calibration multi-évènements aident à l'extraction d'un jeu de paramètres par bassin versant. Ces paramétrisations sont testées sur des évènements de validation. Une méthode de décomposition de la variance des résultats conduit aux sensibilités temporelles du modèle à ses paramètres. Cela permet de mieux appréhender la dynamique des processus physiques rapides en jeu lors de ces crues [Garambois et al., 2012c]. Les paramétrisations retenues sont transférées à l'aide de similarités hydrologiques sur des bassins versants non jaugés, à des fins de prévision opérationnelle. / Climate and orography in the Mediterranean region tend to promote intense rainfall, particularly in autumn. Storms often hit steep catchments. Flood quickness leaves only a very short time for forecasts. Peak flow intensity depends on the great variability of rainfall and catchment characteristics. In fact, observation networks are not adapted to these small space-time scales, and event severity often affects data reliability where data exist; hence the notion of the ungauged catchment emerges. Regionalization in hydrology seeks to determine hydrological variables at locations where these data are lacking. This work contributes to laying the bases of a methodology adapted to transposing the parameterizations of a flash-flood-dedicated distributed hydrologic model from gauged catchments to ungauged ones, over a large study area. The MARINE distributed hydrologic model is used [Roux et al., 2011]; its originality lies in the automatically differentiated adjoint model, able to perform calibrations and spatial-temporal sensitivity analyses, in order to improve understanding of flash flood generating mechanisms and real-time data assimilation for hydrometeorological forecasts.
The MARINE sensitivity analysis addresses the question of physical process understanding. A large panel of hydrologic behaviours is explored. General catchment behaviours are highlighted for the study area [Garambois et al., 2012a]. Selected flood events and a multiple-event calibration technique help to extract catchment parameter sets. Those parameterizations are tested on validation events. A variance decomposition method leads to temporal parameter sensitivity analysis. It enables a better understanding of the dynamics of the physical processes involved in flash flood formation [Garambois et al., 2012c]. Parameterizations are then transferred from gauged catchments to ungauged ones using hydrologic similarity, with a view to developing real-time flood forecasting.
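The variance decomposition idea can be illustrated with first-order and total Sobol indices computed by the pick-freeze approach; the toy peak-flow response below stands in for MARINE, and the estimators follow the standard Saltelli/Jansen forms.

```python
# Sketch: first-order and total Sobol indices via pick-freeze sampling.
import numpy as np

rng = np.random.default_rng(6)

def peakflow(x):
    # Toy peak-discharge response to [soil depth, conductivity, friction]:
    return np.exp(-2.0 * x[:, 0]) * (1.0 + 3.0 * x[:, 1]) + 0.5 * x[:, 2]

k, n = 3, 100_000
A = rng.random((n, k))
B = rng.random((n, k))
yA, yB = peakflow(A), peakflow(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # re-sample only factor i
    yABi = peakflow(ABi)
    Si = np.mean(yB * (yABi - yA)) / var_y          # first order (Saltelli, 2010)
    STi = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total effect (Jansen, 1999)
    print(f"x{i+1}: S = {Si:.2f}, ST = {STi:.2f}")
```

A gap between ST and S for a factor signals interactions, the kind of information a temporal variance decomposition tracks through the course of a flood event.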
|
378 |
Economic analysis and Monte Carlo simulation of community wind generation in rural western Kansas (Halling, Todd, January 1900)
Master of Science / Department of Electrical and Computer Engineering / Anil Pahwa / Energy costs are rising, supplies of fossil fuels are diminishing, and environmental concerns surrounding power generation in the United States are at an all-time high. The United States is continuing to push all states for energy reform, and where better for Kansas to look than wind energy? Kansas is second among all states in wind generation potential; however, the best wind generation sites are located predominantly in sparsely populated areas, creating energy transportation problems. Because of these issues, interest in community wind projects has been increasing. To determine the economic potential of community wind generation, a distribution system in rural western Kansas where interest in community wind exists was examined, and a feasibility study based on historical data, economic factors, and current grid constraints was performed. Since the majority of the load in this area is from pivot-point irrigation systems, load distributions were created based on temperature ranges instead of a linear progression of concurrent days. To test the economic viability, three rate structures were examined: flat energy rate, demand rate, and critical peak pricing. A Monte Carlo simulation was designed and run to simulate twenty-year periods based on the available historical data; twenty-year net present worth calculations were performed to assess economic viability. A sensitivity analysis was then performed to examine the effects of changes in turbine size and energy rate scale. Finally, an energy storage analysis was performed to examine the economic viability of various sizes of battery storage systems.
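A stripped-down version of the twenty-year net-present-worth Monte Carlo could look like the sketch below; turbine size, capacity-factor distribution, rates, and cost figures are placeholders rather than the study's western Kansas data.

```python
# Toy twenty-year net-present-worth Monte Carlo for a community wind turbine.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
capex = 3.5e6                      # $ installed cost, 1.5 MW class (assumed)
rate = 0.07                        # $/kWh flat energy rate (assumed)
discount = 0.05                    # discount rate (assumed)
om_frac = 0.02                     # annual O&M as fraction of capex (assumed)

# Annual capacity factor drawn per year around an assumed 0.38 mean.
cf = rng.normal(0.38, 0.05, size=(n, 20)).clip(0.15, 0.55)
energy_kwh = cf * 1500 * 8760      # kW rating times hours per year

years = np.arange(1, 21)
cash = energy_kwh * rate - om_frac * capex           # $/year, shape (n, 20)
npw = (cash / (1 + discount) ** years).sum(axis=1) - capex

print(f"P(NPW > 0) = {(npw > 0).mean():.2f}, "
      f"median NPW = ${np.median(npw)/1e6:.2f}M")
```

Re-running this with a demand-rate or critical-peak-pricing cash-flow model in place of the flat rate is the shape of the comparison the study performs.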
|
379 |
Modélisation du bilan carboné et hydrique d'une forêt méditerranéenne à structure complexe : de l'année au siècle / Carbon and water budget modelling for a highly structured Mediterranean forest: from years to century (Marie, Guillaume, 19 September 2014)
Le bilan de carbone des écosystèmes forestiers implique de nombreux processus, rendant difficile la prédiction de leurs réponses aux changements climatiques. À des échelles larges, les processus écologiques ne peuvent être modélisés que de manière simplifiée et doivent donc se focaliser sur les processus importants. Par ailleurs, le développement de forêts mélangées est de plus en plus encouragé. Or ce type de forêt présente des degrés de complexité supplémentaires. D'une part, la structuration du couvert en 3D est susceptible d'influencer les flux de carbone, et d'autre part, les espèces coexistantes peuvent répondre de manières différentes aux changements climatiques. La forêt de Font-Blanche constitue un cas d'étude original car elle est spatialement hétérogène. De plus, les modèles climatiques prédisent une réduction importante des précipitations au cours du XXIe siècle en région méditerranéenne. Mais l'échelle du siècle peut être exigeante en temps de calcul lorsqu'on veut prendre en compte la structure de la canopée. Dans cette thèse, j'ai donc modifié le domaine d'utilisation d'un modèle d'écosystème mécaniste, de l'année au siècle, grâce à la technique de méta-modélisation. Le méta-modèle a donné de bons résultats, qui m'ont permis de réaliser une étude d'impact du changement climatique à l'échelle du siècle sur la forêt de Font-Blanche. Les résultats montrent que la représentation spatiale du couvert et l'effet de rétroaction du bilan hydrique jouent un rôle important et ne peuvent pas être simplifiés à long terme, à cause de la dynamique des espèces qui la composent, qui représente la plus grande source de variations du bilan de carbone. / The carbon balance of forest ecosystems involves many complex processes, making it difficult to predict their responses to climate change. At larger scales, ecological processes can only be modelled in a simplified way and must therefore focus on the important processes. Furthermore, the development of mixed forests is increasingly promoted, and this type of stand has additional degrees of complexity. On the one hand, a complex canopy structure is likely to influence carbon fluxes; on the other hand, coexisting species may respond differently to climate change. The Font-Blanche forest is an original case study because it is spatially heterogeneous. In addition, climate models predict significant reductions in rainfall during the 21st century for the Mediterranean region, but the century time scale can be very demanding in computation time if one wants to take the canopy structure into account. In this thesis, a 3D mechanistic forest ecosystem model (noTG) was therefore modified to extend its temporal scale from years to a century, thanks to a meta-modelling technique. The meta-modelling gives good results, and the meta-modelled version of noTG (notgmeta) was used to predict the carbon and water balance of the Font-Blanche forest between 2008 and 2100 under different climate change scenarios. We find that photosynthesis, soil respiration and plant respiration are stimulated until 2100, with this stimulation decreasing at the end of the simulation. We also find that the spatial representation of the canopy and the feedback effect of the water balance play an important role and cannot be simplified in long-term simulations, since species dynamics represent the largest source of carbon balance variation.
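The meta-modelling step (substituting a cheap surrogate for the expensive ecosystem model) can be illustrated in a few lines; the "expensive model" below is a stand-in function, not noTG, and the quadratic polynomial basis is an arbitrary choice.

```python
# Minimal surrogate (meta-model) fit: cheap regression replacing a costly model.
import numpy as np

rng = np.random.default_rng(8)

def expensive_model(x):
    # Stand-in for annual net carbon flux vs [temperature, rainfall] (scaled 0-1).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 1]

def features(x):
    # Quadratic polynomial basis with an interaction term.
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

X_train = rng.random((60, 2))                 # small training design
coef, *_ = np.linalg.lstsq(features(X_train), expensive_model(X_train), rcond=None)

X_test = rng.random((1000, 2))
pred = features(X_test) @ coef                # surrogate is now essentially free
err = pred - expensive_model(X_test)
print(f"surrogate RMSE = {np.sqrt(np.mean(err ** 2)):.3f} "
      f"(response std = {expensive_model(X_test).std():.3f})")
```

Once the surrogate is validated against the full model over the relevant input range, century-long scenario runs become computationally cheap, which is the point of the approach described above.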
|
380 |
[en] RELIABILITY BASED OPTIMIZATION: APPLICATION TO SPACE TRUSSES / [pt] OTIMIZAÇÃO BASEADA EM CONFIABILIDADE: APLICAÇÃO A TRELIÇAS ESPACIAIS (Anderson Pereira, 25 September 2007)
[pt] No projeto de estruturas de engenharia há, freqüentemente, incertezas associadas às propriedades dos materiais, às propriedades geométricas e aos carregamentos. A maneira mais comum e tradicional para se levar em conta estas incertezas é através da definição dos valores de projeto como o resultado do produto do valor característico das variáveis aleatórias por um fator parcial de segurança. Esta solução, no entanto, falha ao não permitir a quantificação da confiabilidade do projeto ótimo, uma vez que um fator grande de segurança pode não significar uma confiabilidade mais alta. Para se considerar a natureza probabilística de quantidades como propriedades dos materiais, carregamentos, etc., tem-se que identificar e definir estas quantidades como variáveis aleatórias no modelo de análise. Desta maneira, a probabilidade de falha (ou a confiabilidade) de uma estrutura sujeita a uma restrição de desempenho na forma de uma função de estado limite pode, então, ser calculada e formulada como uma restrição num problema de otimização. Neste trabalho, restrições probabilísticas são incorporadas ao esquema tradicional de otimização estrutural. A formulação e os métodos numéricos para este processo, comumente chamado de otimização baseada em confiabilidade, são descritos. O objetivo principal é apresentar um sistema computacional capaz de resolver problemas de otimização de forma e de dimensões de treliças espaciais baseado em confiabilidade. Podem ser consideradas como variáveis, determinísticas ou aleatórias, as seções transversais, as coordenadas nodais, as propriedades dos materiais (módulo de elasticidade e tensão de escoamento) e os carregamentos. De maneira a tratar os problemas de instabilidade global, são considerados os efeitos da não-linearidade geométrica no comportamento da estrutura e uma restrição formulada para uma função de estado limite associada à carga de colapso é incluída. Funções de estado limite referentes aos deslocamentos e às tensões também são consideradas. A flambagem global das barras é considerada por meio da carga crítica de Euler. / [en] Uncertainties associated with random variables, such as the material properties and loads, are inherent to the design of structures. These uncertainties are traditionally taken into account before the design by defining design values for the random variables. The design values of the random variables are obtained from statistical properties of the random variables and from partial safety factors. Once these values are defined, the variables are treated as deterministic variables in the design process. This approach has been followed in conventional design optimization and in many design codes, such as the Brazilian code for the design of steel and concrete structures. This simple approach, however, does not allow an estimate of the structural reliability of the resulting project, which may have a low (unsafe structure) or a very high (expensive structure) reliability. To overcome this problem, a reliability analysis must be incorporated into the traditional design optimization. Design optimization incorporating reliability analyses has been denoted Reliability-Based Design Optimization (RBDO). In RBDO, the constraints are defined in terms of the probabilities of failure associated with some prescribed failure functions, and therefore it requires, as in the reliability analysis, the definition of the random variables and information about their statistical properties. In this work, RBDO is applied to the shape and sizing optimization of spatial trusses considering geometric nonlinearities. The constraints considered in the RBDO problem are related to the following failure mechanisms: global collapse (limit load), local buckling, yield stress, and serviceability conditions (displacement bounds). The algorithms used for solving the optimization problem and for performing the reliability analysis are described.
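The RBDO formulation (optimization subject to a probabilistic constraint) can be sketched on a deliberately simple case: a single tension bar sized so that its Monte Carlo failure probability meets a target. The load and yield-stress distributions and the target probability are assumptions; the thesis itself treats space trusses with geometrically nonlinear analysis, which this sketch does not attempt.

```python
# Sketch: smallest cross-section satisfying a failure-probability constraint.
import numpy as np

rng = np.random.default_rng(9)
n = 200_000
load = rng.normal(100e3, 15e3, n)            # axial load (N), assumed distribution
fy = rng.normal(250e6, 20e6, n)              # yield stress (Pa), assumed distribution
pf_target = 1e-3                             # assumed target failure probability

def pf(area):
    # Limit-state function g = resistance - load effect; failure when g < 0.
    g = fy * area - load
    return np.mean(g < 0)

lo, hi = 1e-4, 1e-2                          # search bounds on area (m^2)
for _ in range(40):                          # bisection: pf decreases with area
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pf(mid) > pf_target else (lo, mid)

area = hi
print(f"required area = {area * 1e4:.2f} cm^2, Pf = {pf(area):.4f}")
```

In a full RBDO setting this probabilistic constraint sits inside a shape/sizing optimizer and the limit states (collapse load, buckling, displacements) come from a structural analysis rather than a closed-form expression.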
|