41
Nutrient Uptake Estimates for Woody Species as Described by the NST 3.0, SSAND, and PCATS Mechanistic Nutrient Uptake Models. Lin, Wen. 31 August 2009.
With the advent of the personal computer, mechanistic nutrient uptake models have become widely used as research and teaching tools in plant and soil science. Three models, NST 3.0, SSAND, and PCATS, have evolved to represent the current state of the art. There are two major categories of mechanistic models: transient-state models with numerical solutions and steady-state models. NST 3.0 belongs to the former type, while SSAND and PCATS belong to the latter. NST 3.0 has been used extensively in crop research but has not been used with woody species, and only a few studies using SSAND and PCATS are available. To better understand the similarities and differences of these three models, it is useful to compare model predictions with experimental observations using multiple datasets from the literature that represent a range of situations for woody species. Therefore, the objectives of this study are to: (i) compare the predictions of uptake by the NST 3.0, SSAND, and PCATS models for a suite of nutrients against experimentally measured values; (ii) compare the behavior of the three models using a one-dimensional sensitivity analysis; and (iii) compare and contrast the behavior of NST 3.0 and SSAND using a multi-dimensional sensitivity analysis approach. Predictions of nutrient uptake by the three models when run with a common data set were diverse, indicating a need for a reexamination of model structure. The failure of many of the predictions to match observations indicates the need for further studies that produce representative datasets so that the predictive accuracy of each model can be evaluated. Both types of sensitivity analysis suggest that the effect of soil moisture on the simulation can be influential when the nutrient concentration in the soil solution (CLi) is low. The one-dimensional sensitivity analysis also revealed that Imax negatively influenced the uptake estimates from the SSAND and PCATS models.
Further analysis indicates that this counterintuitive response to Imax is probably related to low soil nutrient supply. The predictions of SSAND under low-nutrient-supply scenarios are generally lower than those of NST 3.0. We suspect that both of these results are artifacts of the steady-state models, and further studies to improve them, such as incorporating important rhizospheric effects, are needed if they are to be used successfully for the longer growth periods and lower soil nutrient supplies more typical of woody species. / Master of Science
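The one-dimensional (one-at-a-time) analysis described above can be sketched in a few lines. The Michaelis-Menten influx term below is the kinetic form these uptake models share; the parameter values are illustrative placeholders, not data from the thesis:

```python
# One-at-a-time (one-dimensional) sensitivity sketch for a Michaelis-Menten
# nutrient influx term, the kind of kinetic component shared by NST 3.0,
# SSAND, and PCATS.  Parameter values are illustrative, not from the thesis.

def influx(Imax, Km, CLi, Cmin=0.0):
    """Net influx per unit root surface (Michaelis-Menten kinetics)."""
    return Imax * (CLi - Cmin) / (Km + CLi - Cmin)

def oat_sensitivity(base, param, factor=1.5):
    """Relative change in influx when one parameter is scaled by `factor`."""
    perturbed = dict(base)
    perturbed[param] *= factor
    f0 = influx(**base)
    f1 = influx(**perturbed)
    return (f1 - f0) / f0

base = {"Imax": 2.0e-6, "Km": 5.0e-3, "CLi": 1.0e-3}   # illustrative units

# Influx is proportional to Imax, so scaling Imax by 1.5 raises influx by
# exactly 50% here, while a larger Km acts in the opposite direction.
print(oat_sensitivity(base, "Imax"))   # +0.5: influx scales linearly with Imax
print(oat_sensitivity(base, "Km"))     # negative: larger Km lowers influx
```

In this single-equation sketch a larger Imax can only raise uptake; the thesis's finding that Imax lowered the SSAND and PCATS estimates arises from interactions with low soil nutrient supply that a full model run captures and this fragment deliberately omits.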
42
Sensitivity analysis and evolutionary optimization for building design. Wang, Mengchao. January 2014.
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate the design-solution and performance information needed to identify energy- and cost-efficient design solutions. Sensitivity analysis is used to identify the design variables that have the greatest impact on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints, building design being an inherently multi-objective process: for instance, there is commonly a desire to minimise both the building energy demand and the capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with model-based optimization in order to reduce the computational effort of running a robust optimization and to provide insight into the solution sensitivities in the neighbourhood of each optimum solution. However, little research has been conducted to explore the extent to which the solutions found by a building design optimization can be used for a global or local sensitivity analysis, or the extent to which the local sensitivities differ from the global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretised-continuous variables (with physical meaning) and categorical variables (without physical meaning).
This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for use with problems having mixed discretised-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, given that the optimization process causes these solutions to be biased; and the extent to which global and local sensitivities differ. The experiments conducted in this research are based on the mid-floor of a commercial office building having five zones, located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop times, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with bidirectional elimination, rank transformation of the variables, and the BIC (Bayesian information criterion). It is concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and that there is therefore no need to generate a separate set of random samples for use in the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence.
Finally, it is concluded that differences exist in the variable rankings resulting from the global and local sensitivity methods, although the top-ranked variables from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, but that a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building located in four different climates, including Birmingham, UK; San Francisco, US; and Chicago, US.
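The global sensitivity recipe the thesis settles on (stepwise regression with bidirectional elimination, rank transformation of the variables, BIC scoring) can be sketched as follows. The design-variable names and the toy response below are invented for illustration, not the thesis's building model:

```python
# Sketch of rank-transformed bidirectional stepwise regression scored by BIC.
# Pure-Python OLS on a toy problem; variable names are illustrative.
import math, random

def ranks(xs):
    """Rank transform (1..n); ties broken by order of appearance."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos + 1)
    return r

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit with intercept (normal equations)."""
    n = len(y)
    cols = [[1.0] * n] + X
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                      # Gaussian elimination, partial pivoting
        p = max(range(i, k), key=lambda r2: abs(A[r2][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r2 in range(i + 1, k):
            f = A[r2][i] / A[i][i]
            for c in range(i, k):
                A[r2][c] -= f * A[i][c]
            b[r2] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((y[t] - sum(beta[j] * cols[j][t] for j in range(k))) ** 2
               for t in range(n))

def bic(X, y):
    n, k = len(y), len(X) + 1
    return n * math.log(ols_rss(X, y) / n) + k * math.log(n)

def stepwise_bic(variables, y):
    """Bidirectional elimination: take any add/drop move that lowers BIC."""
    chosen, best = [], bic([], y)
    improved = True
    while improved:
        improved = False
        moves = [(v, True) for v in variables if v not in chosen]
        moves += [(v, False) for v in chosen]
        for name, add in moves:
            trial = chosen + [name] if add else [v for v in chosen if v != name]
            score = bic([variables[v] for v in trial], y)
            if score < best - 1e-9:
                chosen, best, improved = trial, score, True
                break
    return chosen

# Toy design problem: one dominant variable, two near-inert ones.
random.seed(1)
n = 60
x = {name: [random.uniform(0.0, 1.0) for _ in range(n)]
     for name in ("orientation", "setpoint", "wwr")}
y = [5.0 * a + 0.1 * random.gauss(0.0, 1.0) for a in x["orientation"]]

rank_x = {name: ranks(col) for name, col in x.items()}
selected = stepwise_bic(rank_x, ranks(y))
print(selected)   # the dominant variable should be picked up
```

The rank transform makes the regression robust to monotone nonlinearity, and BIC's log(n) penalty per term is what keeps near-inert variables out of the selected set.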
43
Designing the Intermodal Multiperiod Transportation Network of a Logistic Service Provider Company for Container Management. Sahlin, Tobias. January 2016.
Lured by the promise of bigger sales, companies are increasingly looking to raise the volume of international trade. Consequently, the amount of bulk product carried in containers and transported overseas has exploded because of the flexibility and reliability of this type of transportation. However, minimizing the logistics costs arising from container flow management across different terminals has emerged as a major problem that companies and affiliated third-party logistics firms face routinely. The empty tank container allocation problem occurs in the context of intermodal distribution systems management and transportation operations carried out by logistic service provider companies. This paper considers the time-evolving supply chain system of an international logistic service provider company that transports bulk products loaded in tank containers via road, rail, and sea. In such a system, unbalanced movements of loaded tank containers force the company to reposition empty tank containers. The purpose of this paper is to develop a mathematical model that supports tactical decisions for the flow management of empty tank containers. The problem involves dispatching empty tank containers of various types to meet on-time delivery requirements and repositioning the remaining tank containers to storage facilities, depots, and cleaning stations. To this aim, a mixed-integer linear programming (MILP) multiperiod optimization model is developed. The model is analyzed and developed step by step, and its functionality is demonstrated by conducting experiments on the network from our case study problem, within the borders of Europe. The case study comprises three different scenarios of empty tank container allocation.
The computational experiments show that the model finds good-quality solutions and demonstrate that cost and modality improvements can be achieved in the network. The sensitivity analysis employs a set of data from our case study, together with randomly selected data, to highlight certain features of the model and provide insights into its behavior.
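The full model is a MILP and needs a solver; as a solver-free illustration of the decision structure (reposition empties to meet demand versus hold them), this sketch enumerates all plans of a toy two-terminal, two-period instance. All stocks, demands, and costs are invented for illustration:

```python
# Toy version of the multiperiod empty tank container allocation decision.
# The thesis formulates this as a MILP; absent a solver, this sketch simply
# enumerates every integer repositioning plan for a tiny two-terminal,
# two-period instance.  Costs, stocks, and demands are invented.
from itertools import product

periods = 2
stock = {"A": 4, "B": 0}                 # empty containers on hand at t=0
demand = {("B", 0): 2, ("B", 1): 1}      # empties needed at terminal B
move_cost = 3.0                          # cost to reposition one empty A->B
hold_cost = 1.0                          # cost to hold one empty per period
shortage_pen = 100.0                     # penalty per unmet demand unit

def plan_cost(moves):
    """moves[t] = empties repositioned A->B at the start of period t."""
    a, b, cost = stock["A"], stock["B"], 0.0
    for t in range(periods):
        if moves[t] > a:
            return float("inf")          # cannot move more than on hand
        a -= moves[t]; b += moves[t]
        cost += move_cost * moves[t]
        need = demand.get(("B", t), 0)
        used = min(b, need)
        b -= used
        cost += shortage_pen * (need - used)
        cost += hold_cost * (a + b)      # holding cost on remaining empties
    return cost

best = min(product(range(stock["A"] + 1), repeat=periods), key=plan_cost)
print(best, plan_cost(best))   # optimal plan: move 2 then 1 empties, cost 12.0
```

In the real formulation these decisions become integer flow variables over a time-expanded network, with constraints per container type, transport mode, depot, and cleaning station, which is exactly why a MILP solver is needed at practical scale.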
44
Energy Considerations for Pipe Replacement in Water Distribution Systems. Prosser, Monica. 21 August 2013.
Water utilities are facing pressure to continue to provide high-quality potable water in an increasingly energy-constrained world; managing the ageing infrastructure that exists in many countries is a challenge in and of itself, but recently this has been coupled with political and public attention to the environmental impacts of the distribution system. Utility managers need to take a holistic approach to decision-making in order to determine all of the impacts of their plans.
The intention of this thesis is to present a set of considerations for utility planners and managers to provide clarity on the trade-offs associated with any pipe replacement decision. This research has examined the relationships between operational energy reduction and the embodied energy tied to replacing deteriorated pipes in water distribution networks. These relationships were investigated through the development and application of a life-cycle energy analysis (LCEA) for three different pipe replacement schedules developed with the intent of reducing leakage in the system. The results showed that the embodied energy for pipe replacement is significant even when compared against the large amount of energy required to operate a large-scale water utility. The annual operational energy savings of between 8.9 and 9.6 million kWh achieved by 2070 through pipe replacement come at a cost: 0.88 to 2.05 million kWh per mile for replacement with ductile iron pipes of 6" to 16" diameter, respectively. This imbalance resulted in a maximum energy payback period of 17.6 years for the most aggressive replacement plan in the first decade. Some of the assumptions used to complete the LCEA were investigated through a sensitivity analysis; the specific factors queried numerically include the break rate forecasting method, the pumping efficiency, the leakage duration, and the flow rate per leakage event.
Accurate accounting of energy requirements for pipe replacement will become even more important as energy and financial constraints continue to tighten for most water utilities; this thesis provides guidance on some of the complex relationships that need to be considered. / Thesis (Master, Civil Engineering) -- Queen's University, 2013-08-21
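The payback logic reported above reduces to a ratio of embodied energy to annual operational savings. The figures below are illustrative placeholders, not the thesis's data; only the shape of the trade-off is the point:

```python
# Back-of-envelope energy payback calculation of the kind used in the
# thesis's life-cycle energy analysis.  The embodied-energy intensity,
# mileage, and savings below are invented for illustration.

embodied_per_mile = 1.5e6      # kWh embodied per mile of ductile iron pipe (illustrative)
miles_replaced = 100.0         # miles replaced in the first decade (illustrative)
annual_savings = 9.0e6         # kWh/yr operational (pumping/leakage) savings (illustrative)

embodied_total = embodied_per_mile * miles_replaced
payback_years = embodied_total / annual_savings
print(f"payback = {payback_years:.1f} years")   # 16.7 years with these inputs
```

A more aggressive schedule raises both the embodied total and the savings, so the payback period is not monotone in replacement rate; that interplay is what the LCEA's sensitivity analysis probes.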
45
Crop model parameter estimation and sensitivity analysis for large scale data using supercomputers. Lamsal, Abhishes. January 1900.
Doctor of Philosophy / Department of Agronomy / Stephen M. Welch / Global crop production must be doubled by 2050 to feed 9 billion people. Novel crop improvement methods and management strategies are the sine qua non for achieving this goal. This requires reliable quantitative methods for predicting the behavior of crop cultivars in novel, time-varying environments. In the last century, two different mathematical prediction approaches emerged (1) quantitative genetics (QG) and (2) ecophysiological crop modeling (ECM). These methods are completely disjoint in terms of both their mathematics and their strengths and weaknesses. However, in the period from 1996 to 2006 a method for melding them emerged to support breeding programs.
The method involves two steps: (1) exploiting ECM's to describe the intricate, dynamic, and environmentally responsive biological mechanisms determining crop growth and development on daily or hourly time scales; and (2) using QG to link genetic markers to the values of ECM constants (called genotype-specific parameters, GSP's) that encode the responses of different varieties to the environment. This can require huge amounts of computation because ECM's have many GSP's as well as site-specific properties (SSP's, e.g. soil water holding capacity). Moreover, one cannot employ QG methods unless the GSP's from hundreds to thousands of lines are known. Thus, the overall objective of this study is to identify better ways to reduce the computational burden without sacrificing ECM predictability.
The study has three parts: (1) using the extended Fourier Amplitude Sensitivity Test (eFAST) to globally identify parameters of the CERES-Sorghum model that require accurate estimation under wet and dry environments; (2) developing a novel estimation method (Holographic Genetic Algorithm, HGA) applicable to both GSP and SSP estimation and testing it with the CROPGRO-Soybean model using 182 soybean lines planted in 352 site-years (7,426 yield observations); and (3) examining the behavior under estimation of the anthesis date prediction component of the CERES-Maize model. The latter study used 5,266 maize Nested Association Mapping lines and a total of 49,491 anthesis date observations from 11 plantings.
Three major problems were discovered that challenge the ability to link QG and ECM’s: 1) model expressibility, 2) parameter equifinality, and 3) parameter instability. Poor expressibility is the structural inability of a model to accurately predict an observation. It can only be solved by model changes. Parameter equifinality occurs when multiple parameter values produce equivalent model predictions. This can be solved by using eFAST as a guide to reduce the numbers of interacting parameters and by collecting additional data types. When parameters are unstable, it is impossible to know what values to use in environments other than those used in calibration. All of the methods that will have to be applied to solve these problems will expand the amount of data used with ECM’s. This will require better optimization methods to estimate model parameters efficiently. The HGA developed in this study will be a good foundation to build on. Thus, future research should be directed towards solving these issues to enable ECM’s to be used as tools to support breeders, farmers, and researchers addressing global food security issues.
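eFAST estimates first-order (main-effect) sensitivity indices via a frequency-domain search curve; the sketch below estimates the same quantity, S_i = V[E(Y|X_i)]/V(Y), with a plain Monte Carlo pick-freeze estimator instead, which is simpler to show but targets the same index. The model, inputs, and sample size are invented for illustration:

```python
# Simplified stand-in for eFAST: a Monte Carlo "pick-freeze" estimate of the
# first-order sensitivity index S_i = V[E(Y|X_i)] / V(Y) on a toy model.
# (eFAST itself obtains these indices from Fourier coefficients along a
# space-filling search curve; the target quantity is the same.)
import random

random.seed(0)

def model(x1, x2, x3):
    # toy response: x1 dominates, x3 is inert
    return 4.0 * x1 + 1.0 * x2 + 0.0 * x3

def first_order_index(i, n=20000):
    """Pick-freeze estimate of S_i for `model` with independent U(0,1) inputs."""
    ya, yab = [], []
    for _ in range(n):
        a = [random.random() for _ in range(3)]
        b = [random.random() for _ in range(3)]
        ab = list(b); ab[i] = a[i]          # freeze coordinate i from sample a
        ya.append(model(*a)); yab.append(model(*ab))
    mean = sum(ya) / n
    var = sum((v - mean) ** 2 for v in ya) / n
    cov = sum((u - mean) * (v - mean) for u, v in zip(ya, yab)) / n
    return cov / var

s = [first_order_index(i) for i in range(3)]
print(s)   # roughly [0.94, 0.06, 0.0] for this additive model
```

For this additive model the exact indices are 16/17, 1/17, and 0, so a screening step of this kind immediately flags which parameters are worth the cost of careful estimation, which is how eFAST is used in part (1) of the study.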
46
Rozhodovací model exportu pro Českou republiku / Export Decision Support Model for the Czech Republic. Couceiro Vlasak, Carlos. January 2016.
In this paper, an Export Decision Support Model (EDSM) applied to the Czech Republic is developed, with the aim of finding export opportunities. The model works as a filtering process in which a stream of data composed of numerous socio-economic indicators representing world trade is analysed. For the construction of the filters, an extensive literature review was conducted, relying strongly on a previous EDSM also targeted at the Czech Republic, as no explicit rule yet exists describing their appropriate composition. If a given market, determined by its associated matrix of indicators, fulfils the conditions of the model, it is retrieved as an export opportunity. After the model is constructed, it is supplied with two streams of data, for 2010 and for 2014, and the hypothesis that the output is equal for both years is evaluated, with the intention of inferring whether the constructed model needs periodical recalibration for its appropriate use. Finally, a local sensitivity analysis is deployed to uncover the behaviour of the different parameters of the model, a novel approach not yet implemented in an EDSM tailor-made for the Czech Republic. JEL Classification: F10, F13, F23, M31. Keywords: export opportunity, entrepreneurship, international marketing, sensitivity analysis, trade. Author's e-mail...
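The filtering process at the core of an EDSM can be sketched directly: a market is retrieved as an export opportunity only if its indicator values satisfy every condition. The indicator names, thresholds, and markets below are invented; as the abstract notes, no explicit rule yet prescribes the appropriate composition:

```python
# Sketch of the EDSM filtering step: keep a market only if its indicator
# matrix passes every condition.  Indicators, thresholds, and markets are
# invented for illustration, not the model's actual rules.

conditions = {
    "gdp_growth_pct":    lambda v: v >= 2.0,   # growing economy
    "import_growth_pct": lambda v: v >= 3.0,   # growing demand for the product
    "tariff_pct":        lambda v: v <= 5.0,   # acceptable trade barriers
}

markets = {
    "Germany":   {"gdp_growth_pct": 1.5, "import_growth_pct": 4.0, "tariff_pct": 0.0},
    "Vietnam":   {"gdp_growth_pct": 6.1, "import_growth_pct": 8.2, "tariff_pct": 4.0},
    "Ruritania": {"gdp_growth_pct": 3.0, "import_growth_pct": 1.0, "tariff_pct": 12.0},
}

def export_opportunities(markets, conditions):
    """Retrieve the markets whose indicators pass every filter condition."""
    return sorted(
        name for name, ind in markets.items()
        if all(test(ind[key]) for key, test in conditions.items())
    )

print(export_opportunities(markets, conditions))   # ['Vietnam'] with these inputs
```

A local sensitivity analysis of such a model then amounts to nudging one threshold at a time and observing which markets enter or leave the retrieved set.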
47
Addressing inequalities in eye health with subsidies and increased fees for General Ophthalmic Services in socio-economically deprived communities: a sensitivity analysis. Shickle, D., Todkill, D., Chisholm, Catharine M., Rughani, S., Griffin, M., Cassels-Brown, A., May, H., Slade, S.V., Davey, Christopher J. 07 November 2014.
Objectives: Poor knowledge of eye health, concerns about the cost of spectacles, mistrust of optometrists, and limited geographical access in socio-economically deprived areas are barriers to accessing regular eye examinations, and result in low uptake and subsequent late presentation to ophthalmology clinics. Personal Medical Services (PMS) were introduced in the late 1990s to provide locally negotiated solutions to problems associated with inequalities in access to primary care. An equivalent approach to the delivery of optometric services could address inequalities in the uptake of eye examinations.
Study design: One-way and multiway sensitivity analyses.
Methods: Variations in assumptions were included in the models for equipment and accommodation costs, uptake, and length of appointments. The sensitivity analysis thresholds were a cost per person tested below the GOS1 fee paid by the NHS, and achieving breakeven between income and expenditure, assuming no cross-subsidy from profits on sales of optical appliances.
Results: Cost per test ranged from £24.01 to £64.80, and the subsidy required varied from £14,490 to £108,046. Unused capacity utilised for local enhanced service schemes, such as glaucoma referral refinement, reduced the subsidy needed. / Yorkshire Eye Research, NHS Leeds, RNIB
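A one-way sensitivity analysis of this kind varies a single assumption, here annual uptake, while holding the rest at base case, and watches the cost-per-test and subsidy move relative to their thresholds. All figures below are illustrative placeholders, not the paper's cost model:

```python
# One-way sensitivity sketch: vary annual uptake only and observe cost per
# test and the subsidy required.  Fixed costs, variable cost, and the fee
# are illustrative assumptions, not the paper's figures.

fixed_costs = 40000.0     # equipment + accommodation per year (illustrative)
cost_per_appt = 10.0      # variable cost per appointment (illustrative)
gos1_fee = 21.31          # NHS fee per sight test (illustrative value)

def cost_per_test(tests_per_year):
    return fixed_costs / tests_per_year + cost_per_appt

def subsidy_required(tests_per_year):
    """Shortfall between expenditure and fee income (0 once breakeven is reached)."""
    shortfall = (cost_per_test(tests_per_year) - gos1_fee) * tests_per_year
    return max(0.0, shortfall)

for uptake in (1000, 2000, 4000):          # tests delivered per year
    print(uptake, round(cost_per_test(uptake), 2), round(subsidy_required(uptake), 2))
```

Because fixed costs are spread over the number of tests, the subsidy shrinks as uptake rises and vanishes once the cost per test drops below the fee, which is why unused capacity filled by enhanced service schemes reduces the subsidy needed.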
48
Matrix Dynamic Models for Structured Populations. Islam, Md Sajedul. 01 December 2019.
Matrix models are formulated to study the dynamics of structured populations. We consider both closed populations, that is, populations without migration, and populations with migration. The effects of specific patterns of migration, whether with constant or time-dependent terms, are explored within the context of how they manifest in model output, such as population size. Time functions, commonly known as relative sensitivities, are employed to rank the parameters of the models from most to least influential on the population size or the abundance of individuals per group.
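A minimal sketch of the approach: project a stage-structured population with a Leslie-type matrix, then compute a relative sensitivity by scaling one vital rate by 1% and re-projecting. The three-stage rates below are invented for illustration:

```python
# Minimal stage-structured (Leslie-type) projection with a relative
# sensitivity: scale one vital rate by 1%, re-project, and compare the
# total population size.  The 3-stage rates are invented for illustration.

def project(A, n0, steps):
    """Iterate n(t+1) = A n(t) and return the final stage vector."""
    n = list(n0)
    for _ in range(steps):
        n = [sum(A[i][j] * n[j] for j in range(len(n))) for i in range(len(n))]
    return n

A = [[0.0, 1.2, 3.0],    # fecundities of stages 2 and 3
     [0.5, 0.0, 0.0],    # survival from stage 1 to 2
     [0.0, 0.7, 0.4]]    # survival 2 -> 3 and retention in stage 3

n0 = [10.0, 5.0, 2.0]
steps = 20

def relative_sensitivity(i, j, eps=0.01):
    """Relative change in total population per relative change in A[i][j]."""
    base = sum(project(A, n0, steps))
    Ap = [row[:] for row in A]
    Ap[i][j] *= (1.0 + eps)
    pert = sum(project(Ap, n0, steps))
    return ((pert - base) / base) / eps

# Fecundity of stage 2 vs survival into stage 2: both influence growth positively.
print(relative_sensitivity(0, 1), relative_sensitivity(1, 0))
```

Ranking all entries A[i][j] by this quantity is the discrete analogue of the relative sensitivity functions used in the thesis; a migration term would appear as an additive (constant or time-dependent) vector in the update step.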
49
Limit Cycles and Dynamics of Rumor Models. Odero, Geophrey Otieno. 01 December 2013.
This thesis discusses limit cycles and the behavior of rumor models. The first part presents the deterministic Daley-Kendall (DK) model with arrivals and departures, and a comparison of the Susceptible-Infective-Removed (SIR) model with the DK model. The second part gives a qualitative analysis of the general behavior of an extension of the Daley-Kendall model; here we discuss how the halting rate of spreaders causes the model to change from a stable equilibrium to a stable limit cycle. In the third part we carry out model validation, fitting both synthetic data and real data sets to the numerical solutions of the extended Daley-Kendall model. Finally, we find the parameter estimates and standard errors. In this way we are able to decide whether the numerical solutions quantifying the relationships between the variables obtained from the qualitative analysis can be accepted as the best description of the data. We discuss sensitivity analysis results and traditional sensitivity functions.
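The baseline deterministic Daley-Kendall model can be integrated in a few lines; the extension studied in the thesis adds arrival and departure terms to these same equations. The rates and initial condition below are illustrative:

```python
# The classical Daley-Kendall rumor model (ignorants x, spreaders y,
# stiflers z) integrated with a simple Euler scheme.  Rates and the
# initial condition are illustrative; the thesis's extension adds
# arrival/departure terms to these equations.

def dk_step(x, y, z, beta=1.0, gamma=1.0, dt=0.001):
    dx = -beta * x * y                       # ignorant meets spreader
    dz = gamma * y * (y + z)                 # spreader meets spreader/stifler and stops
    dy = -dx - dz                            # spreaders gain from x, lose to z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.99, 0.01, 0.0                    # proportions, x + y + z = 1
for _ in range(20000):                       # integrate to t = 20
    x, y, z = dk_step(x, y, z)

# Classical result: spreaders die out and roughly 20% of the population
# never hears the rumor (x tends to about 0.203 when beta = gamma).
print(round(x, 3), round(y, 3), round(z, 3))
```

In this closed model every trajectory settles to a rumor-free state; it is the arrival and departure terms of the extended model that can sustain the oscillations (stable limit cycles) the thesis analyzes.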
50
Etude de la complémentarité et de la fusion des images qui seront fournies par les futurs capteurs satellitaires OLCI/Sentinel 3 et FCI/Meteosat Troisième Génération / Study of the complementarity and the fusion of the images that will be provided by the future satellite sensors OLCI/Sentinel-3 and FCI/Meteosat Third Generation. Peschoud, Cécile. 17 October 2016.
The objective of this thesis was to propose, validate, and compare methods for fusing images provided by a multispectral Low Earth Orbit (LEO) sensor and a multispectral geostationary (GEO) sensor, in order to produce water composition maps that are spatially detailed and refreshed as often as possible. Our methodology was applied to the OLCI LEO sensor on Sentinel-3 and the FCI GEO sensor on Meteosat Third Generation. First, the sensitivity of the two sensors to water color was analyzed. As images from both sensors were not yet available, they were simulated over the Gulf of Lion using hydrosol maps (chlorophyll, suspended matter, and colored dissolved organic matter) and radiative transfer models (Hydrolight and Modtran). Two fusion methods were then adapted and tested on the simulated images: the SSTF (Spatial, Spectral, Temporal Fusion) method, inspired by the fusion approach of Vanhellemont et al. (2014), and the STARFM (Spatial and Temporal Adaptive Reflectance Fusion Model) method of Gao et al. (2006). The fusion results were validated against the simulated reference images, and the hydrosol maps estimated from the fused images were compared with the maps used as input to the simulations. To improve the SNR of the FCI images, a temporal filtering was proposed. Finally, as the aim is to obtain water quality indicators, the fusion methods were also adapted and tested on the hydrosol maps estimated from the simulated FCI and OLCI images.
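The temporal core that STARFM builds on can be stated in one line: predict the fine-resolution image at t2 from the fine image at t1 plus the change observed by the coarse sensor. The 2x2 "images" below are invented for illustration; full STARFM refines this with spectrally and spatially weighted averaging over similar neighboring pixels:

```python
# The core idea STARFM builds on: fine(t2) ~= fine(t1) + coarse(t2) - coarse(t1),
# i.e. keep the fine sensor's spatial detail and apply the coarse sensor's
# temporal change.  The 2x2 reflectance "images" are invented for illustration.

def naive_fusion(fine_t1, coarse_t1, coarse_t2):
    return [[f + c2 - c1
             for f, c1, c2 in zip(rf, rc1, rc2)]
            for rf, rc1, rc2 in zip(fine_t1, coarse_t1, coarse_t2)]

fine_t1   = [[0.10, 0.12], [0.30, 0.28]]   # OLCI-like reflectance at t1
coarse_t1 = [[0.11, 0.11], [0.29, 0.29]]   # FCI-like, resampled to the fine grid, t1
coarse_t2 = [[0.16, 0.16], [0.24, 0.24]]   # same pixels at t2: change of +/-0.05

pred_t2 = naive_fusion(fine_t1, coarse_t1, coarse_t2)
print(pred_t2)   # fine spatial detail kept, coarse temporal change applied
```

The SNR concern in the abstract matters precisely here: noise in the coarse differences propagates directly into the fused prediction, which motivates the temporal filtering of the FCI images before fusion.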