Essays in inventory decisions under uncertainty
Manikas, Andrew Steven, 31 March 2008
Uncertainty is the norm in business decisions. In this research, we focus on inventory decisions for companies facing uncertain customer demand. We first investigate forward buying strategies for single-stage inventory decisions. This situation is common in commodity industries, where prices often fluctuate significantly from one purchasing opportunity to the next and demands are random. We propose a combined heuristic to determine the optimal number of future periods a firm should purchase at each ordering opportunity in order to maximize total expected profit when both future demand and future buying price are uncertain. Second, we study the complexities of bundling products in an Assemble-To-Order (ATO) environment. We outline a salvage manipulator mechanism that coordinates the decentralized supply chain. Third, we extend our salvage manipulator mechanism to a two-stage supply chain with a long cumulative lead time. With significant lead times, the assumption that the suppliers all see the same demand distribution as the retailer no longer holds.
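The forward buying decision described above can be sketched as a small Monte Carlo search over the number of future periods to cover at today's price. All numbers, the i.i.d. normal future-price model, the deterministic mean demand, and the linear holding cost are illustrative assumptions, not the combined heuristic from the abstract:

```python
import random

def optimal_forward_periods(p0, price_mu, price_sigma, demand_mu,
                            holding_cost, max_periods=12, n_sims=10_000):
    """Estimate, by Monte Carlo, how many future periods (k) to cover by
    buying forward at today's price p0 so as to minimize expected cost.
    Assumptions (illustrative only): future prices are i.i.d. normal,
    demand is fixed at its mean, and holding cost grows linearly with
    how far ahead a period is covered.
    """
    best_k, best_cost = 0, float("inf")
    for k in range(max_periods + 1):
        total = 0.0
        for _ in range(n_sims):
            cost = 0.0
            for t in range(max_periods):
                if t < k:
                    # covered by today's buy: pay p0 plus t periods of holding
                    cost += (p0 + holding_cost * t) * demand_mu
                else:
                    # wait and buy later at an uncertain price
                    price = max(random.gauss(price_mu, price_sigma), 0.0)
                    cost += price * demand_mu
            total += cost
        avg = total / n_sims
        if avg < best_cost:
            best_k, best_cost = k, avg
    return best_k, best_cost
```

With today's price well below the expected future price and negligible holding cost, the search covers every period; with today's price well above it, it covers none.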

Design with Uncertain Technology Evolution
Arendt, Jonathan Lee, August 2012
Design is an uncertain human activity involving decisions with uncertain outcomes. Sources of uncertainty in product design include uncertainty in modeling methods, market preferences, and the performance levels of subsystem technologies, among many others. A technology's performance evolves over time, improving as research and development efforts continue. Because the future performance of a technology is uncertain, quantifying the evolution of these technologies poses a challenge in making design decisions, and designing systems involving evolving technologies is a poorly understood problem. The objective of this research is to create a computational method allowing designers to make decisions that encompass the evolution of technology. Techniques are developed for modeling the evolution of a technology that has multiple performance attributes. An S-curve technology evolution model is used: the performance of a technology develops slowly at first, quickly during heavy R&D effort, and slowly again as the performance approaches its limits. Pareto frontiers represent the set of optimal solutions from which the decision maker can select. As the performance of a technology develops, the Pareto frontier shifts to a new location. The assumed S-curve form of technology development allows the designer to apply the uncertainty of technology development directly to the S-curve evolution model rather than to the performance itself, giving a more focused application of uncertainty in the problem. Monte Carlo simulations are used to propagate uncertainty through the decision. The decision-making methods give designers greater insight when making long-term decisions regarding evolving technologies. The scenario of an automotive manufacturing firm entering the electric vehicle market and deciding which battery technology to include in its new line of electric cars is used to demonstrate the decision-making method.
A second scenario, a wind turbine energy company deciding which technology to invest in, demonstrates a more sophisticated technology evolution modeling technique and the method for decision making under uncertainty.
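One common functional form for such an S-curve is the logistic. The sketch below places the uncertainty on the curve's rate and midpoint parameters, rather than on raw performance, and propagates it by Monte Carlo as the abstract describes; the logistic form and the normal parameter distributions are assumptions for illustration, not the thesis's specific model:

```python
import math
import random

def s_curve(t, limit, rate, midpoint):
    """Logistic S-curve: performance develops slowly at first, quickly
    around the midpoint of heavy R&D effort, and slowly again as it
    approaches its limit."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

def expected_performance(t_horizon, limit, rate_mu, rate_sigma,
                         mid_mu, mid_sigma, n_sims=20_000):
    """Propagate uncertainty in the S-curve parameters through Monte
    Carlo sampling and return the mean predicted performance at the
    decision horizon t_horizon."""
    total = 0.0
    for _ in range(n_sims):
        rate = max(random.gauss(rate_mu, rate_sigma), 1e-6)
        mid = random.gauss(mid_mu, mid_sigma)
        total += s_curve(t_horizon, limit, rate, mid)
    return total / n_sims
```

A decision maker comparing, say, two candidate battery technologies would run `expected_performance` for each (with its own limit, rate, and midpoint estimates) and compare the distributions at the planning horizon.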

Quantifying pollutant spreading and the risk of water pollution in hydrological catchments: A solute travel time-based scenario approach
Persson, Klas, January 2011
The research presented in this thesis develops an approach for estimating and mapping pollutant spreading in catchments and the associated uncertainty and risk of pollution. The first step in the approach is the quantification and mapping of statistical and geographical distributions of advective solute travel times from pollutant input locations to downstream recipients. In the second step, the travel time distributions are used to quantify and map the spreading of specific pollutants and the related risk of water pollution. In both steps, random variability of transport properties and processes is accounted for within a probabilistic framework, while different scenarios are used to account for statistically unquantifiable uncertainty about system characteristics, processes and future developments. This scenario approach enables a transparent analysis of uncertainty effects that is relatively easy to interpret. It also helps identify conservative assumptions and pollutant situations for which further investigations are most needed in order to reduce the uncertainty. The results for different investigated scenarios can further be used to assess the total risk of exceeding given water quality standards downstream of pollutant sources. Specific thesis results show that underestimating pollutant transport variability, and in particular those transport pathways with much shorter than average travel times, may lead to substantial underestimation of pollutant spreading in catchment areas. By contrast, variations in pollutant attenuation rate generally lead to lower estimated spreading than do constant attenuation conditions. A scenario of constant attenuation rate and high travel time variability, with a large fraction of relatively short travel times, therefore appears to be a reasonable conservative scenario to use when information is lacking for a more precise determination of actual transport and attenuation conditions.
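The role of travel-time variability can be illustrated with a toy calculation: for a pollutant attenuating at a constant rate k during transport, the fraction delivered downstream is E[exp(-kT)] over the travel-time distribution T. Because exp(-kT) is convex in T, higher travel-time variability (a larger fraction of short pathways) raises the delivered fraction, consistent with the conservative scenario identified above. The lognormal travel-time model here is an assumption for illustration, not the thesis's transport model:

```python
import math
import random

def delivered_fraction(mean_tt, cv, decay_rate, n_sims=50_000):
    """Monte Carlo estimate of the pollutant fraction surviving
    transport, E[exp(-k*T)], with lognormal travel times T of a given
    mean and coefficient of variation (cv)."""
    sigma2 = math.log(1.0 + cv * cv)        # lognormal shape from the CV
    mu = math.log(mean_tt) - 0.5 * sigma2   # chosen to match the target mean
    sigma = math.sqrt(sigma2)
    total = 0.0
    for _ in range(n_sims):
        t = random.lognormvariate(mu, sigma)
        total += math.exp(-decay_rate * t)
    return total / n_sims
```

Holding the mean travel time and the attenuation rate fixed while increasing the coefficient of variation increases the delivered fraction, i.e. the estimated spreading.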

Valid estimation and prediction inference in analysis of a computer model
Nagy, Béla
Computer models or simulators are becoming increasingly common in many fields in science and engineering, powered by the phenomenal growth in computer hardware over the
past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output.
Often, running the code involves computationally expensive tasks, such as numerically solving complex systems of partial differential equations. When simulator runs become too long, this may limit their usefulness. In order to overcome time or budget constraints by making the most of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments".
The main idea is to run the expensive simulator only at relatively few, carefully chosen design points in the input space and, based on the outputs, construct an emulator (statistical model) that can emulate (predict) the output at new, untried
locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response
surface of the original computer model.
One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface and make
assertions that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to be able
to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy that we use here is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function that is ideal in cases when the response function is infinitely differentiable.
In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments, in terms of matching the coverage probabilities of the prediction bands, and that the associated reparameterizations can also aid parameter uncertainty assessments.
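A minimal emulator of the kind described, a Gaussian process with the Gaussian (squared-exponential) covariance and pointwise ~95% prediction bands, might look like the following sketch; the hyperparameters are fixed by hand for illustration, and the FBI method for inferring them is not reproduced:

```python
import numpy as np

def gp_predict(X_train, y_train, X_new, length_scale=1.0, signal_var=1.0,
               noise=1e-8):
    """Gaussian-process emulator of a deterministic simulator, using the
    Gaussian covariance (suitable for smooth, infinitely differentiable
    response surfaces). A tiny jitter `noise` keeps the covariance
    matrix numerically invertible. Returns the predictive mean and
    pointwise ~95% prediction bands at the new input locations."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_new, X_train)
    Kss = k(X_new, X_new)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                   # predictive mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)           # predictive covariance
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

At the design points the emulator interpolates the simulator output and the bands collapse; far from the design points they widen toward the prior variance, which is exactly the uncertainty the coverage assessment checks.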

Certainties and uncertainties: ethics and professional identities of early childhood educators
Thomas, Louise M., January 2009
This study is an inquiry into the professional identity constructions of early childhood educators, where identity is conceptualised as social and contextual. Through a genealogical analysis of narratives of four Queensland early childhood teachers, the thesis renders as problematic universal and fixed notions of what it is to be an early childhood professional. The data are the four teachers’ professional life history narratives recounted through a series of conversational interviews with each participant. As they spoke about professionalism and ethics, these teachers struggled to locate themselves as professionals, as they drew on a number of dominant discourses available to them. These dominant discourses were located and mapped through analysis of the participants’ talk about relationships with parents, colleagues and authorities. Genealogical analysis enabled multiple readings of the ways in which the participants’ talk held together certainties and uncertainties, as they recounted their experiences and spoke of early childhood expertise, relational engagement and ethics. The thesis concludes with suggestions for ways to support early childhood teachers and pre-service teachers to both engage with and resist normative processes and expectations of professional identity construction. In so doing, multiple and contextual opportunities can be made available when it comes to being professional and ‘doing’ ethics. The thesis makes an argument for new possibilities for thinking and speaking professional identities that include both certainty and uncertainty, comfort and discomfort, and these seemingly oppositional terms are held together in tension, with an insistence that both are necessary and true. The use of provocations offers tools through which pre-service teachers, teachers and teacher educators can access new positions associated with certainties and uncertainties in professional identities. 
These new positions call for work that supports experiences of ‘de-comfort’ – that is, experiences that encourage early childhood educators to step away from the comfort zones that can become part of expertise, professional relationships and ethics embedded within normative representations of what it is to be an early childhood professional.

Characterising the uncertainty in potential large rapid changes in wind power generation
Cutler, Nicholas Jeffrey, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2009
Wind energy forecasting can facilitate the integration of wind energy into a power system. In particular, the management of power system security would benefit from forecast information on plausible large, rapid changes in wind power generation. Numerical Weather Prediction (NWP) systems are presently the best available tools for wind energy forecasting at projection times between 3 and 48 hours. In this thesis, the types of weather phenomena that cause large, rapid changes in wind power in southeast Australia are classified using observations from three wind farms. The results show that the majority of events are due to the horizontal propagation of spatial weather features. A study of NWP systems reveals that they are generally good at forecasting broad, large-scale weather phenomena but may misplace their location relative to the physical world. Errors may result from developing single time-series forecasts from a single NWP grid point, or from a single interpolation of proximate grid points. This thesis presents a new approach that displays NWP wind forecast information from a field of multiple grid points around the wind farm location. Displaying the NWP wind speeds at the multiple grid points directly would potentially be misleading, as each reflects the estimated local surface roughness and terrain at a particular grid point. Thus, a methodology was developed to convert the NWP wind speeds at the multiple grid points to values that reflect surface conditions at the wind farm site. The conversion method is evaluated, with encouraging results, by visual inspection and by comparison with an NWP ensemble. The multiple grid point information can also be used to improve downscaling results by filtering out data where there is a large chance of a discrepancy between an NWP time-series forecast and observations.
The converted wind speeds at multiple grid points can be downscaled to site-equivalent wind speeds and transformed to wind farm power assuming unconstrained wind farm operation at one or more wind farm sites. This provides a visual decision support tool that can help a forecast user assess the possibility of large, rapid changes in wind power from one or more wind farms.
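A generic roughness-change correction of the kind this conversion step requires can be sketched with the neutral logarithmic wind profile and a blending height above which the flow is assumed insensitive to local roughness. This is a textbook-style sketch under stated assumptions, not the specific conversion method developed in the thesis:

```python
import math

def convert_wind_speed(u_grid, z_ref, z0_grid, z0_site, z_blend=60.0):
    """Adjust an NWP wind speed u_grid, valid at height z_ref over a
    grid point with roughness length z0_grid, to reflect the wind farm
    site's roughness z0_site. Assumes a neutral logarithmic wind
    profile and a blending height z_blend (all heights and roughness
    lengths in metres, illustrative defaults)."""
    # extrapolate up to the blending height over the grid point's surface
    u_blend = u_grid * math.log(z_blend / z0_grid) / math.log(z_ref / z0_grid)
    # come back down to z_ref over the site's (possibly smoother) surface
    return u_blend * math.log(z_ref / z0_site) / math.log(z_blend / z0_site)
```

With identical roughness the speed is unchanged; converting from a rougher grid-point surface to a smoother site raises the near-surface speed, as expected.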

Stochastic analysis and robust design of stiffened composite structures
Lee, Merrill Cheng Wei, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW, January 2009
The European Commission 6th Framework Project COCOMAT (Improved MATerial Exploitation at Safe Design of COmposite Airframe Structures by Accurate Simulation of COllapse) was a four-and-a-half-year project (2004 to mid-2008) aimed at exploiting the large reserve of strength in composite structures through more accurate prediction of collapse. In the experimental work packages, significant statistical variation in buckling behaviour and ultimate loading was encountered. The variations observed in the experimental results were not predicted in the finite element analyses done in the early stages of the project. The work undertaken in this thesis to support the COCOMAT project was initiated when it was recognised that there was a gap in knowledge about the effect of initial defects and variations in the input variables of both the experimental and simulated panels. The work involved the development of stochastic algorithms to relate variations in boundary conditions, material properties and geometries to the variation in buckling modes and loads up to first failure. It was proposed in this thesis that any future design had to focus on the dominant parameters affecting the statistical scatter in the results in order to achieve lower sensitivity to variation. A methodology was developed for designing stiffened composite panels with improved robustness, and several panels tested in the COCOMAT project were redesigned using this approach to demonstrate its applicability. The original contributions of this thesis are therefore the development of a stochastic methodology to identify the impact of variation in input parameters on the response of stiffened composite panels and the development of Robust Indices to support the design of new panels. The stochastic analysis included the generation of metamodels that allow quantification of the impact that the inputs have on the response using two first-order variables, Influence and Sensitivity.
These variables are then used to derive the Robust Indices. A significant outcome of this thesis was the recognition in the final report for COCOMAT that the development of a validated robust index should be a focus of any future design of postbuckling stiffened panels.
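A first-order metamodel of the kind mentioned can be sketched as follows: sample the input parameters, evaluate the (expensive) model, and fit a linear surrogate. Here "Sensitivity" is taken as the fitted linear slope and "Influence" as that slope scaled by the input's standard deviation (its contribution to the output scatter). These particular definitions are assumptions for illustration, not the thesis's exact formulas for its Robust Indices:

```python
import numpy as np

def influence_sensitivity(model, mu, sigma, n_samples=2000, seed=0):
    """Monte Carlo sampling of input parameters followed by a
    first-order (linear) metamodel fit by least squares.

    model   : callable mapping an input vector to a scalar response
    mu, sigma : per-input means and standard deviations (normal inputs
                are an illustrative assumption)
    Returns (influence, sensitivity) arrays, one entry per input."""
    rng = np.random.default_rng(seed)
    X = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    y = np.array([model(x) for x in X])
    A = np.column_stack([np.ones(n_samples), X])   # intercept + inputs
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sensitivity = coef[1:]                         # local slope dY/dX_i
    influence = sensitivity * np.asarray(sigma)    # slope scaled by scatter
    return influence, sensitivity
```

An input with a small slope but a large scatter can still dominate the response variation, which is why the scaled quantity matters for robust design.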

Decision-making under spatial uncertainty
Hope, Susannah Jayne, January 2005 (PDF)
Errors are inherent to all spatial datasets and give rise to a level of uncertainty in the final product of a geographic information system (GIS). There is growing recognition that the uncertainty associated with spatial information should be represented to users in a comprehensive and unambiguous way. However, the effects on decision-making of such representations have not been thoroughly investigated. Studies from the psychological literature indicate decision-making biases when information is uncertain. This study explores the effects of representing spatial uncertainty, through an examination of how decision-making may be affected by the introduction of thematic uncertainty and an investigation of the effects of different representations of positional uncertainty on decision-making.

Two case studies are presented. The first considers the effects on decision-making of including thematic uncertainty information within the context of an airport siting decision task. An extremely significant tendency to select a zone for which the thematic classification is known to be of high certainty was observed. The reluctance to select a zone for which the thematic classification is of low certainty was strong enough to sometimes lead to decision-making that can only be described as irrational.

The second case study investigates how decision-making may be affected by different representations of positional uncertainty within the context of maritime navigation. The same uncertainty information was presented to participants using four different display methods. Significant differences in their decisions were observed. Strong preferences for certain display methods were also exhibited, with some representations being ranked significantly higher than others.

The findings from these preliminary studies demonstrate that the inclusion of uncertainty information does influence decision-making but does not necessarily lead to better decisions. A bias against information of low certainty was observed, sometimes leading to irrational decisions. In addition, the form of uncertainty representation itself may affect decision-making. Further research into the effects on decision-making of representing spatial uncertainty is needed before it can be assumed that the inclusion of such information will lead to more informed decisions being made.

Dissipativity, optimality and robustness of model predictive control policies
Løvaas, Christian, January 2008
Research Doctorate, Doctor of Philosophy (PhD)

This thesis addresses the problem of robustness in model predictive control (MPC) of discrete-time systems. In contrast with most previous work on robust MPC, our main focus is on robustness in the face of both imperfect state information and dynamic model uncertainty. For linear discrete-time systems with model uncertainty described by sum quadratic constraints, we propose output-feedback MPC policies that: (i) treat soft constraints using quadratic penalty functions; (ii) respect hard constraints using 'tighter' constraints; and (iii) achieve robust closed-loop stability and non-zero setpoint tracking. Our two main tools are: (1) a new linear matrix inequality condition which parameterizes a class of quadratic MPC cost functions that all lead to robust closed-loop stability; and (2) a new parameterization of soft constraints which has the advantage of leading to optimization problems of prescribable size. The stability test we use for MPC design builds on well-known results from dissipativity theory which we tailor to the case of constrained discrete-time systems. The proposed robust MPC designs are shown to converge to well-known nominal MPC designs as the model uncertainty (description) goes to zero. Furthermore, the present approach to cost function selection is independently motivated by a novel result linking MPC and minimax optimal control theory. Specifically, we show that the considered class of MPC policies are the closed-loop optimal solutions of a particular class of minimax optimal control problems. In addition, for a class of nonlinear discrete-time systems with constraints and bounded disturbance inputs, we propose state-feedback MPC policies that input-to-state stabilize the system.
Our two main tools in this last part of the thesis are: (1) a class of N-step affine state-feedback policies; and (2) a result that establishes equivalence between the latter class and an associated class of N-step affine disturbance-feedback policies. Our equivalence result generalizes a recent result in the literature for linear systems to the case when N is chosen to be less than the nonlinear system's 'input-state linear horizon'.
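For orientation, the nominal, unconstrained, state-feedback case that the robust designs above are shown to converge to reduces to a finite-horizon Riccati recursion: the receding-horizon MPC law is the first-step feedback gain of a finite-horizon LQR problem. This bare-bones sketch includes none of the thesis's robust output-feedback or constraint-handling machinery:

```python
import numpy as np

def mpc_gain(A, B, Q, R, N):
    """Backward Riccati recursion over an N-step horizon for the
    discrete-time system x+ = A x + B u with stage cost x'Qx + u'Ru.
    Returns the first-step gain K, so the receding-horizon
    (unconstrained, nominal) MPC law is u = -K x."""
    P = Q.copy()
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```

Applied to a double integrator with a long enough horizon, the resulting closed loop `A - B K` is stable, which is the property the thesis's dissipativity-based tests certify in the far harder robust, output-feedback setting.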