351

Chemical Contaminants in Drinking Water: An Integrated Exposure Analysis

Khanal, Rajesh 26 May 1999 (has links)
The objective of this research is to develop an integrated exposure model that performs uncertainty analysis of exposure to the entire range of chemical contaminants in drinking water via inhalation, ingestion and dermal sorption. The study focuses on a residential environment. The water devices considered are the shower, bath, bathroom, kitchen faucet, washing machine and dishwasher. All devices impact inhalation exposure, while showering, bathing and washing hands are considered in the analysis of dermal exposure. A set of transient mass balance equations is solved numerically to predict the concentration profiles of a chemical contaminant in three different compartments of a house (shower, bathroom and main house). Inhalation exposure is computed by combining this concentration profile with the occupancy and activity patterns of a specific individual. Mathematical models of dermal penetration, which account for steady-state and non-steady-state analysis, are used to estimate exposure via dermal absorption. Mass transfer coefficients are used to compute the fraction of contaminant remaining in water at the time of ingestion before estimating ingestion exposure. Three chemical contaminants in water are considered for detailed analysis: chloroform, chromium and methyl parathion. These contaminants cover a wide range of chemical properties. The magnitude of overall exposure and the relative contributions of the individual exposure pathways are evaluated for each contaminant. The major pathway of exposure for chloroform is inhalation, which accounts for two-thirds of the total exposure; dermal absorption and ingestion contribute almost equally to the remaining one-third. Ingestion accounts for about 60% of total exposure for methyl parathion, with the remaining 40% via dermal sorption. Nearly all of the total exposure (98%) for chromium is via the ingestion pathway. / Master of Science
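
A minimal sketch of the kind of transient, multi-compartment mass balance the abstract describes, solved numerically and combined with an occupancy pattern to yield an inhaled dose. All volumes, airflows, emission rates and the breathing rate are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical parameters -- illustrative values only.
Q_sb = 1.0   # airflow shower -> bathroom (m^3/min)
Q_bh = 5.0   # airflow bathroom -> main house (m^3/min)
Q_ha = 10.0  # airflow main house -> outdoors (m^3/min)
V = np.array([2.0, 10.0, 300.0])   # compartment volumes (m^3)
S = 0.5                            # emission rate from shower water (mg/min)

def dCdt(C, t):
    """Transient mass balance for shower (0), bathroom (1), main house (2)."""
    shower = (S - Q_sb * C[0]) / V[0]
    bath   = (Q_sb * C[0] - Q_bh * C[1]) / V[1]
    house  = (Q_bh * C[1] - Q_ha * C[2]) / V[2]
    return [shower, bath, house]

t = np.linspace(0, 60, 301)           # one hour, in minutes
C = odeint(dCdt, [0.0, 0.0, 0.0], t)  # concentrations, mg/m^3

# Inhalation exposure: integrate breathed concentration over an occupancy
# pattern, e.g. 10 min in the shower, 5 min in the bathroom, rest in house.
breathing_rate = 0.012                # m^3/min
occupancy = np.where(t <= 10, 0, np.where(t <= 15, 1, 2))
C_breathed = C[np.arange(len(t)), occupancy]
dose = np.trapz(breathing_rate * C_breathed, t)  # mg inhaled
print(f"inhaled dose over 1 h: {dose:.3f} mg")
```
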
352

Uncertainty relations in terms of the Gini index for finite quantum systems

Vourdas, Apostolos 29 May 2020 (has links)
Lorenz values and the Gini index are popular quantities in mathematical economics, and are used here in the context of quantum systems with a finite-dimensional Hilbert space. They quantify the uncertainty in the probability distribution related to an orthonormal basis. It is shown that Lorenz values are superadditive functions and Gini indices are subadditive functions. The supremum over all density matrices of the sum of the two Gini indices with respect to position and momentum states is used to define an uncertainty coefficient which quantifies the uncertainty in the quantum system. It is shown that the uncertainty coefficient is positive, and an upper bound for it is given. Various examples demonstrate these ideas.
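
A small numerical illustration of the position/momentum trade-off the abstract exploits, using the standard economics normalization of the Gini index for a discrete probability vector (the paper's exact normalization may differ). A state that is maximally spread in the position basis is maximally concentrated in the momentum (finite Fourier) basis:

```python
import numpy as np

def gini(p):
    """Gini index of a probability vector p (sums to 1); standard
    economics definition: 0 for uniform, (n-1)/n for a point mass."""
    p = np.sort(np.asarray(p, dtype=float))
    n = p.size
    return (2.0 / n) * np.sum(np.arange(1, n + 1) * p) - (n + 1) / n

d = 4
psi = np.ones(d) / np.sqrt(d)            # maximally spread state
F = np.fft.fft(np.eye(d)) / np.sqrt(d)   # finite Fourier (momentum) basis
p_x = np.abs(psi) ** 2                   # position-basis probabilities
p_k = np.abs(F @ psi) ** 2               # momentum-basis probabilities

print(gini(p_x))  # 0.0: uniform in position, maximal uncertainty
print(gini(p_k))  # 0.75 = (d-1)/d: a point mass in momentum
```
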
353

Cooperative Prediction and Planning Under Uncertainty for Autonomous Robots

Nayak, Anshul Abhijit 11 October 2024 (has links)
Autonomous robots are set to become ubiquitous in the future, with applications ranging from autonomous cars to assistive household robots. These systems must operate in close proximity to dynamic and static objects, including humans and other non-autonomous systems, which adds complexity to their decision-making processes. The behaviour of such objects is often stochastic and hard to predict, and making robust decisions under such uncertainty is challenging for autonomous robots. In the past, researchers have used deterministic approaches to predict the motion of surrounding objects. However, these approaches can be over-confident and do not capture the stochastic behaviour of surrounding objects necessary for safe decision-making. In this dissertation, we show the importance of probabilistic prediction of surrounding dynamic objects and its incorporation into planning for safety-critical decision-making. We utilise Bayesian inference models such as Monte Carlo dropout and deep ensembles to probabilistically predict the motion of surrounding objects. Our probabilistic trajectory forecasting model showed improvement over standard deterministic approaches and could handle adverse scenarios such as sensor noise and occlusion during prediction. The uncertainty-inclusive prediction of surrounding objects is then incorporated into planning: including the predicted states of surrounding objects with their associated uncertainty enables the robot to make proactive decisions while avoiding collisions. / Doctor of Philosophy / In the future, humans will rely heavily on the assistance of autonomous robots for everyday tasks: drones delivering packages, cars driving to places autonomously, and household robots helping with day-to-day activities. In all such scenarios, the robot has to interact with its surroundings, in particular with humans. Robots working in close proximity to humans must be intelligent enough to make safe decisions that do not endanger or intrude on them. Humans, in particular, make abrupt decisions, and their motion can be unpredictable, so the robot must understand a person's intention in order to navigate safely around them. The robot must therefore capture uncertain human behaviour and predict future motion so that it can make proactive decisions. We propose to capture the stochastic behaviour of humans using deep-learning-based prediction models that learn motion patterns from real human trajectories. Our method not only predicts the future trajectories of humans but also captures the uncertainty associated with each prediction. In this thesis, we also propose how to predict human motion under adverse scenarios such as bad weather, which leads to noisy sensing, as well as under occlusion. Further, we integrate the predicted stochastic behaviour of surrounding humans into the robot's planning for safe navigation among humans.
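
A minimal sketch of Monte Carlo dropout, one of the Bayesian approximations the abstract names, applied to a toy next-position predictor. The architecture, layer sizes and dropout rate are illustrative assumptions, not the dissertation's models:

```python
import torch
import torch.nn as nn

# Toy trajectory predictor: flattened past (x, y) points -> next (x, y).
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Dropout(p=0.2),               # kept active at test time for MC dropout
    nn.Linear(64, 64), nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Approximate the predictive mean and std by averaging stochastic
    forward passes with dropout enabled (Gal & Ghahramani, 2016)."""
    model.train()                    # train mode keeps dropout stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

past = torch.randn(1, 8)             # four past (x, y) points, flattened
mean, std = mc_dropout_predict(model, past)
print("predicted next position:", mean, "uncertainty:", std)
```

The spread of the sampled predictions is the uncertainty estimate a planner can consume, for example by inflating obstacle regions in proportion to the predicted standard deviation.
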
354

Particle Image Velocimetry Sensitivity Analysis Using Automatic Differentiation

Grullon Varela, Rodolfo Antonio 12 1900 (has links)
A particle image velocimetry (PIV) software package is analyzed in this work by applying automatic differentiation to it. We create two artificial images containing particles that were moved with a known velocity field over time. These artificial images were created with parameters that we would have in real PIV experiments. We then applied the PIV software to find the output velocity vectors, propagating automatic differentiation through the entire algorithm to track the derivatives of the output vectors with respect to parameters of interest declared as inputs. By analyzing these derivatives we assess the sensitivity of the output vectors to changes in each of the parameters analyzed. One of the most important derivatives calculated in this project is the derivative of the output with respect to the image intensity. In future work we plan to combine this derivative with the intensity probability distribution of each image pixel to find PIV uncertainties. If we achieve this goal we will have an uncertainty method that saves computational power and gives uncertainty values with machine accuracy.
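
A minimal sketch of the idea, assuming a reverse-mode autodiff framework (PyTorch here) rather than the tool used in the thesis: a cross-correlation displacement estimate is made differentiable, and the derivative of the estimated displacement with respect to every pixel intensity falls out of one backward pass. The softmax-centroid peak estimator is an illustrative stand-in for the argmax-plus-subpixel fit a real PIV code uses:

```python
import torch

torch.manual_seed(0)

# Two 16x16 interrogation windows: frame B is frame A shifted along x.
A = torch.rand(16, 16)
B = torch.roll(A, shifts=2, dims=1)
B.requires_grad_(True)               # track derivatives w.r.t. intensities

# Circular cross-correlation via FFT (standard PIV correlation step).
corr = torch.fft.ifft2(torch.fft.fft2(A) * torch.conj(torch.fft.fft2(B))).real

# Differentiable surrogate for peak location: sharp softmax-weighted centroid
# over shift indices; the peak column encodes the x-displacement (mod 16).
w = torch.softmax(corr.flatten() * 50, dim=0).reshape(16, 16)
shifts = torch.arange(16, dtype=torch.float32)
dx = (w.sum(dim=0) * shifts).sum()

dx.backward()                        # reverse-mode automatic differentiation
print("estimated x-shift (mod 16):", dx.item())
print("d(dx)/d(intensity) at pixel (0,0):", B.grad[0, 0].item())
```
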
355

Ökologische Bewertung von zentralen und dezentralen Abwasserentsorgungssystemen

Schubert, Rebecca 09 May 2014 (has links)
Because of their long service lives and their dependence on connection numbers, centralized wastewater disposal systems are inflexible with respect to changing conditions such as demographic change, climate change and declining drinking-water consumption. Small wastewater treatment plants are therefore frequently discussed as an alternative, especially when new, remotely located plots are to be developed and a new sewer connection to the central treatment plant may be costly. To decide between the two options, primarily economic instruments have been used so far; ecological assessment methods have not yet been applied as a decision instrument. This thesis carries out a life cycle assessment (LCA) as an example. An LCA can only support decision-making if its results are robust and reliable, so possible uncertainties in the assessment must be confronted directly. From these problems follows the central research interest: the ecological assessment of decentralized wastewater disposal systems in comparison with centralized systems, as a function of the length of the sewer network and the number of connected inhabitants, with particular attention to data uncertainty in the LCA. The LCA method follows DIN EN ISO 14040 and DIN EN ISO 14044. The literature review showed that little data on small wastewater treatment plants is available for an LCA, so a questionnaire was sent to manufacturers of such plants. On the basis of the collected data, an LCA is compiled for a sequencing batch reactor (SBR) plant and a rotating biological contactor plant, with scenario analysis used to capture the spread of the data. The data for the central plant come from the ecoinvent database for wastewater disposal (DOKA, G. (2007)), from which further scenarios are constructed. The functional unit is four population equivalents. After compiling the life cycle inventory, the "CML 2 baseline 2000" method is used for impact assessment in SimaPro 7.1. A break-even analysis examines the dependence of the results on the length of the sewer network and the number of connected inhabitants, and data uncertainty is integrated by means of a sensitivity analysis and a Monte Carlo analysis. The analyses show that installing a small wastewater treatment plant becomes more worthwhile the farther the municipal treatment plant is from the household and the lower the number of connected inhabitants, and that an SBR plant is preferable to a rotating biological contactor plant. The methods used to handle uncertainty show that LCAs are suitable as a decision instrument, but methods for avoiding or integrating uncertainties must be incorporated much more strongly into LCAs of wastewater disposal systems.
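
A minimal sketch of the break-even logic under Monte Carlo uncertainty that the abstract describes: the central system's impact grows with sewer length per connected inhabitant, and the preferable option flips beyond some length. All impact figures and spreads are illustrative assumptions standing in for the thesis' survey and ecoinvent data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo draws

# Illustrative impact figures (kg CO2-eq per person-year).
small_plant = rng.normal(120.0, 25.0, n)        # decentralized SBR plant
central_treatment = rng.normal(80.0, 10.0, n)   # central plant, excl. sewer
sewer_per_m = rng.normal(0.8, 0.2, n)           # sewer burden per metre-year

inhabitants = 4                                  # functional unit: 4 PE

def central_total(length_m):
    """Central-system impact per person-year for a given sewer length."""
    return central_treatment + sewer_per_m * length_m / inhabitants

# Break-even: sewer length at which the small plant wins in most draws.
for L in range(0, 501, 50):
    p = np.mean(small_plant < central_total(L))
    print(f"sewer length {L:4d} m: small plant preferable in {p:.0%} of draws")
```
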
356

Conquering Variability for Robust and Low Power Designs

Sun, Jin January 2011 (has links)
As device feature sizes shrink to the nano-scale, continued technology scaling has led to a large increase in parameter variability in the semiconductor manufacturing process. According to the source of uncertainty, parameter variations can be classified into three categories: process variations, environmental variations, and temporal variations. All of these variation sources exert significant influence on circuit performance and make it more challenging to characterize parameter variability and achieve robust, low-power designs. The scope of this dissertation is conquering parameter variability and successfully designing efficient yet robust integrated circuit (IC) systems. Previous experience indicates that this issue must be tackled at every design stage of IC chips. In this dissertation, we propose several robust techniques for accurate variability characterization and efficient performance prediction under parameter variations. For the pre-silicon verification stage, we develop a robust yield prediction scheme under limited descriptions of parameter uncertainties, a robust circuit performance prediction methodology based on the importance of uncertainties, and a robust gate sizing framework using an ElasticR estimation model. These techniques offer ways to achieve both prediction accuracy and computational efficiency in the early design stages. For the on-line validation stage, we propose a dynamic workload balancing framework and an on-line self-tuning design methodology for application-specific multi-core systems under variability-induced aging effects. These on-line validation techniques help alleviate device performance degradation due to parameter variations and extend device lifetime.
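
A minimal sketch of Monte Carlo parametric yield estimation, the baseline task that a robust yield prediction scheme addresses. The linear-plus-interaction delay model and all coefficients are illustrative assumptions, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Toy delay model: path delay as a function of two normalized process
# parameters (channel length and threshold voltage variation).
dL  = rng.normal(0.0, 1.0, n)     # normalized channel-length variation
dVt = rng.normal(0.0, 1.0, n)     # normalized threshold-voltage variation
delay = 1.0 + 0.08 * dL + 0.05 * dVt + 0.02 * dL * dVt   # ns, nominal 1.0

spec = 1.15                        # timing specification (ns)
yield_est = np.mean(delay <= spec)
print(f"parametric yield estimate: {yield_est:.3%}")
```
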
357

A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

Akram, Muhammad Farooq 28 March 2012 (has links)
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation is critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory, and often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio: when multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear, whereas current methods assume that technologies are either incompatible or linearly independent. When knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process; it reduces the number of assumptions made during elicitation, where experts are otherwise forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited to the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems of deterministic or traditional probabilistic approaches to uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicited technology synergy matrices (TSMs), which increase the fidelity of current technology forecasting methods by including higher-order technology interactions. As a test case for the quantification of epistemic uncertainty, a large-scale combined cycle power generation system was selected, and a detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results show that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is given, and various combination methods are proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enables the capture of higher-order technology interactions and improves the predicted system performance.
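
A minimal sketch of Dempster's rule of combination, the core operation of the evidence theory the abstract advocates, combining two hypothetical experts' basic probability assignments over intervals of a technology's impact. The intervals and masses are invented for illustration:

```python
from itertools import product

# Basic probability masses over intervals of fuel-burn impact (%).
expert1 = {(-8, -4): 0.5, (-6, -2): 0.3, (2, 4): 0.2}
expert2 = {(-7, -3): 0.5, (-5, -1): 0.4, (-10, 0): 0.1}

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Dempster's rule: intersect focal elements, multiply masses, and
# renormalize by the total non-conflicting mass.
combined, conflict = {}, 0.0
for (ia, ma), (ib, mb) in product(expert1.items(), expert2.items()):
    cap = intersect(ia, ib)
    if cap is None:
        conflict += ma * mb          # contradictory evidence
    else:
        combined[cap] = combined.get(cap, 0.0) + ma * mb

combined = {k: v / (1.0 - conflict) for k, v in combined.items()}
print("conflict mass:", conflict)
for interval, mass in sorted(combined.items()):
    print(f"impact in {interval}: mass {mass:.3f}")
```
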
358

An optimal framework of investment strategy in brownfields redevelopment by integrating site-specific hydrogeological and financial uncertainties

Yu, Soonyoung January 2009 (has links)
Brownfields redevelopment has been encouraged by governments and the real estate market because of its economic, social and environmental benefits. However, uncertainties in contaminated land redevelopment may create massive investment risk, and they need to be managed if redevelopment is to be facilitated. This study was designed to address hydrogeological as well as economic uncertainty in a hypothetical contaminated land redevelopment project and to manage the resulting risk by integrating the two kinds of uncertainty. Hydrogeological uncertainty derives from incomplete site information, including aquifer heterogeneity, and must be assessed with scientific expertise, given the short history of redevelopment projects and their unique hydrogeological characteristics. It has not previously been incorporated into a single framework with economic uncertainty, which is relatively well observed in financial markets. Two cases of Non-Aqueous Phase Liquid (NAPL) contamination were simulated using a physically based hydrogeological model to address hydrogeological uncertainty: one concerns the effect of an ethanol spill on a light NAPL (LNAPL) contaminated area in the vadose zone, and the other the vapour-phase intrusion of volatile organic compounds, in particular the dense NAPL (DNAPL) Trichloroethylene (TCE), into indoor air through a variably saturated heterogeneous aquifer. The first simulation replicated experimental observations in the laboratory, such as the capillary fringe depressing and the NAPL pool remobilizing and collecting in a reduced area exhibiting higher saturations than observed prior to the ethanol injection; however, the data gap between the model and the experiment, in particular in the chemical properties, introduced uncertainty into the simulation. The second simulation was based on a hypothetical scenario in which new dwellings in a redeveloped area carry the risk of vapour-phase intrusion from a subsurface source into indoor air because remediation or foundation design might fail. The results indicated that aquifer heterogeneity is the most significant factor controlling the indoor air exposure risk from a TCE source in the saturated zone. The exposure risk was then quantified using Monte Carlo simulations with 50 statistically equivalent heterogeneous aquifer permeability fields. The quantified risk (probability) represents the hydrogeological uncertainty in the scenario and provides information on the loss occurrence intensity of redevelopment failure. The probability of failure (loss occurrence intensity) was integrated with the cost of failure (loss magnitude) to evaluate the risk capital of the hypothetical brownfields redevelopment project. The term "risk capital" is adopted from the financial literature and denotes the capital one can lose on a high-risk investment. The cost of failure involves economic uncertainty and can be defined through a developer's financial agreement with new dwellers to prevent litigation in the case of certain events, such as an environmental event in which indoor air concentrations of pollutants exceed regulatory limits during periodic inspections. The developer makes such an agreement because the dwellings were constructed on the basis of flawed site information, and municipalities may require it as a condition of land use planning approval.
The presumed agreement is that the developer would immediately repurchase the affected houses if indoor air contamination exceeded the regulatory limit, remediate any remaining contamination, demolish the affected houses, and build new ones if they were worth the investment. Under this financial plan, the stochastic housing price, inflation rate and interest rate are taken as the sources of uncertainty in the cost of failure; information on these stochastic variables is obtained from the financial market, given its long history of observations. This research reviewed risk capital valuation methods that hydrogeologists can apply straightforwardly to their projects, integrating the probability of failure (hydrogeological uncertainty) with the cost of failure (economic uncertainty). The risk capital is essentially the probability of failure times the cost of failure, with safety loading added to compensate investors for the hydrogeological and financial uncertainty. Fair market prices of risk capital were valued using financial mathematics and actuarial premium calculations, each method carrying a specific safety loading term that reflects investors' level of risk aversion. The results indicated that the price of risk capital is much more sensitive to hydrogeological than to financial uncertainty. Developers can manage the risk capital by saving a contingency fee for future events or by paying an insurance premium, since the price of this risk capital is the price of a claim contingent on failure in remediation or foundation design, and is equivalent to an environmental insurance premium if an insurance company indemnifies the liability for the developer. An optimal framework of investment strategy in brownfields redevelopment can then be built by linking the assessment and integration of uncertainties to the valuation of the risk capital arising from them. The framework balances the costs associated with each step while maximizing the net profit from land redevelopment; the optimal investment strategy, such as whether and when to remediate or redevelop and to what degree, is the one for which the future price of the land, minus time and material costs as well as the contingency fee or insurance premium, maximizes the net profit.
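
A minimal sketch of the premium logic the abstract summarizes, risk capital as probability of failure times cost of failure plus a safety loading, under two standard actuarial premium principles. The failure probability, cost model and loading parameters are illustrative assumptions, not the thesis' results:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Loss occurrence: exceedance probability standing in for the output of the
# hydrogeological Monte Carlo over the 50 permeability fields.
p_fail = 0.12
failure = rng.random(n) < p_fail

# Loss magnitude: repurchase cost of an affected house with stochastic
# housing-price variation (illustrative lognormal model).
cost = 400_000 * rng.lognormal(mean=0.0, sigma=0.15, size=n)
loss = failure * cost

# Safety loading compensates investors beyond the expected loss.
theta, alpha = 0.2, 0.5
ev_premium = (1 + theta) * loss.mean()            # expected-value principle
sd_premium = loss.mean() + alpha * loss.std()     # standard-deviation principle
print(f"E[loss]                      = {loss.mean():12,.0f}")
print(f"expected-value premium       = {ev_premium:12,.0f}")
print(f"standard-deviation premium   = {sd_premium:12,.0f}")
```
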
359

Three essays on fair division and decision making under uncertainty

Xue, Jingyi 16 September 2013 (has links)
The first chapter is based on a paper with Jin Li on fair division. It was recently discovered that on the domain of Leontief preferences, Hurwicz (1972)'s classic impossibility result does not hold; that is, one can find efficient, strategy-proof and individually rational rules to divide resources among agents. Here we consider the problem of dividing l divisible goods among n agents with generalized Leontief preferences. We propose and characterize the class of generalized egalitarian rules, which satisfy efficiency, group strategy-proofness, anonymity, resource monotonicity, population monotonicity, envy-freeness and consistency. On the Leontief domain, our rules generalize the egalitarian-equivalent rules with reference bundles. We also extend our rules to agent-specific and endowment-specific egalitarian rules. The former is a larger class of rules satisfying all the previous properties except anonymity and envy-freeness; the latter is a class of efficient, group strategy-proof, anonymous and individually rational rules when the resources are assumed to be privately owned. The second and third chapters are based on two of my working papers on decision making under uncertainty. In the second chapter, I study the wealth effect under uncertainty: how the wealth level impacts a decision maker's degree of uncertainty aversion. I axiomatize a class of preferences displaying decreasing absolute uncertainty aversion, which allows a decision maker to be more willing to engage in uncertainty-bearing behavior as he becomes wealthier. Three equivalent preference representations are obtained. The first is a variation on the constraint criterion of Hansen and Sargent (2001). The other two respectively generalize Gilboa and Schmeidler (1989)'s maxmin criterion and Maccheroni, Marinacci and Rustichini (2006)'s variational representation. This class, when restricted to preferences exhibiting constant absolute uncertainty aversion, is exactly Maccheroni, Marinacci and Rustichini (2006)'s variational preferences. Thus, the results further enable us to establish relationships among the representations of several important classes within variational preferences. The three representations provide different decision rules that rationalize the same class of preferences, and these decision rules correspond to three ways proposed in the literature to identify a decision maker's perception of uncertainty and his attitude toward it. However, I give examples showing that these identifications conflict with each other, which means there is much freedom in eliciting two unobservable and subjective factors, one's perception of and attitude toward uncertainty, from choice behavior alone. This observation motivates the work in Chapter 3. In the third chapter, I introduce confidence orders in addition to preference orders. Axioms are imposed on both orders to reveal a decision maker's perception of uncertainty and to characterize the following decision rule: a decision maker evaluates an act based on his aspiration and his confidence in this aspiration. Each act corresponds to a trade-off line between the two criteria: the more he aspires to, the less his confidence in achieving the aspiration level. The decision maker ranks an act by the optimal combination of aspiration and confidence on its trade-off line, according to an aggregating preference of his over the two-criterion plane. The aggregating preference indicates his uncertainty attitude, while his perception of uncertainty is summarized by a generalized second-order belief over the prior space, revealed by his confidence order.
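
For reference, a sketch of the two benchmark criteria the second chapter generalizes, in standard notation (an act f, utility u, state space S, prior set C, and ambiguity index c):

```latex
% Gilboa-Schmeidler (1989) maxmin expected utility over a set C of priors:
V(f) = \min_{p \in C} \int_S u(f(s)) \,\mathrm{d}p(s)

% Maccheroni-Marinacci-Rustichini (2006) variational preferences, where the
% index c(p) \ge 0 penalizes each prior p in the simplex \Delta(S):
V(f) = \min_{p \in \Delta(S)} \left[ \int_S u(f(s)) \,\mathrm{d}p(s) + c(p) \right]
```

The maxmin criterion is the special case in which c(p) is zero on C and infinite elsewhere; the chapter's representations relax such penalty structures to accommodate wealth-dependent uncertainty aversion.
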
360

Bayesian calibration of building energy models for energy retrofit decision-making under uncertainty

Heo, Yeonsook 10 November 2011 (has links)
Retrofitting of existing buildings is essential to reach reduction targets in energy consumption and greenhouse gas emissions. In the current retrofit decision process, professionals perform energy audits and construct dynamic simulation models to benchmark the performance of existing buildings and predict the effect of retrofit interventions. To enhance the reliability of these simulation models, they typically calibrate them against monitored energy use data. The calibration techniques used for this purpose are manual and expert-driven. This practice has two major drawbacks: (1) the modeling and calibration methods do not scale to large portfolios of buildings because of their high cost and heavy reliance on expertise, and (2) the resulting deterministic models provide no insight into the underperformance risks associated with each retrofit intervention. This thesis develops a new retrofit analysis framework that is suitable for large-scale analysis and risk-conscious decision-making, based on the use of normative models and Bayesian calibration techniques. Normative models are lightweight quasi-steady-state energy models that can scale up to large sets of buildings, i.e. to city and regional scale. In addition, they do not require modeling expertise, since they follow a set of modeling rules that produce a standard measure of energy performance. The normative models are calibrated under a Bayesian approach such that the resulting calibrated models quantify uncertainties in the energy outcomes of a building. Bayesian calibration models can also incorporate additional uncertainties associated with retrofit interventions to generate probability distributions of retrofit performance. These probabilistic outputs can be translated straightforwardly into a measure that quantifies the underperformance risks of retrofit interventions, enabling decisions that reflect the decision-makers' objectives and risk attitude. The thesis demonstrates the feasibility of the new framework on retrofit applications by verifying two hypotheses: (1) normative models supported by Bayesian calibration have sufficient model fidelity to adequately support retrofit decisions, and (2) they can support risk-conscious decision-making by explicitly quantifying the risks associated with retrofit options. The two hypotheses are examined through case studies that, respectively, compare outcomes from the calibrated normative model with those from a similarly calibrated transient simulation model, and compare decisions derived from the proposed framework with those derived from standard practice. The new framework will enable cost-effective retrofit analysis at urban scale with explicit management of uncertainties.
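
A minimal sketch of the workflow the abstract describes: Bayesian calibration of a toy quasi-steady-state energy model with a random-walk Metropolis sampler, followed by propagation of the posterior to a retrofit scenario. The one-parameter model, its coefficients, the prior bounds and the noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "normative" model: monthly energy use as a function of one uncertain
# parameter (infiltration rate in air changes per hour) and degree days.
def energy_model(ach, degree_days):
    return 50.0 + 12.0 * ach * degree_days / 100.0   # kWh/m^2

degree_days = np.array([420.0, 380.0, 300.0, 180.0])
observed = energy_model(0.6, degree_days) + rng.normal(0, 2.0, 4)

def log_post(ach):
    """Uniform prior on [0.1, 2.0], Gaussian likelihood with sigma = 2."""
    if not 0.1 <= ach <= 2.0:
        return -np.inf
    resid = observed - energy_model(ach, degree_days)
    return -0.5 * np.sum((resid / 2.0) ** 2)

# Random-walk Metropolis sampling of the posterior.
samples, ach = [], 1.0
for _ in range(20_000):
    prop = ach + rng.normal(0, 0.05)
    if np.log(rng.random()) < log_post(prop) - log_post(ach):
        ach = prop
    samples.append(ach)
post = np.array(samples[5_000:])      # discard burn-in
print(f"posterior ACH: {post.mean():.2f} +/- {post.std():.2f}")

# Propagate to a retrofit that cuts infiltration by 30%: a distribution of
# predicted savings, from which underperformance risk can be read off.
dd = degree_days.sum()
savings = energy_model(post, dd) - energy_model(post * 0.7, dd)
print(f"predicted savings: {savings.mean():.1f} kWh/m^2 "
      f"(5th percentile: {np.percentile(savings, 5):.1f})")
```
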
