51

Revised Model for Antibiotic Resistance in a Hospital

Pei, Ruhang 01 May 2015 (has links)
In this thesis we modify an existing model for the spread of resistant bacteria in a hospital. The existing model does not account for some of the trends seen in data reported in the literature; the new model takes several of these trends into account. For the new model, we examine issues relating to identifiability, sensitivity analysis, parameter estimation, uncertainty analysis, and equilibrium stability.
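The abstract does not give the model equations, but a minimal compartment-model sketch of this kind of analysis might look like the following; the compartments, rates, and parameter values are illustrative assumptions, not the thesis's actual model, and the finite-difference bump at the end is just one simple way to begin the sensitivity analysis mentioned above.

```python
# A minimal hospital-ward compartment model; all structure and numbers here
# are illustrative assumptions, not the thesis's actual equations.
from scipy.integrate import solve_ivp

def ward_model(t, y, beta_s, beta_r, gamma, mu):
    """U: uncolonized, S: sensitive-strain carriers, R: resistant carriers."""
    U, S, R = y
    N = U + S + R
    dU = -(beta_s * S + beta_r * R) * U / N + gamma * (S + R)
    dS = beta_s * S * U / N - (gamma + mu) * S
    dR = beta_r * R * U / N - gamma * R + mu * S  # mu: emergence of resistance
    return [dU, dS, dR]

params = (0.4, 0.3, 0.1, 0.02)  # assumed transmission/clearance rates per day
y0 = [90.0, 9.0, 1.0]           # assumed ward census
sol = solve_ivp(ward_model, (0, 365), y0, args=params)

# Crude finite-difference sensitivity of the final resistant count to beta_r.
eps = 1e-4
sol_b = solve_ivp(ward_model, (0, 365), y0, args=(0.4, 0.3 + eps, 0.1, 0.02))
print(f"R(365) = {sol.y[2, -1]:.2f}, "
      f"dR/dbeta_r ~ {(sol_b.y[2, -1] - sol.y[2, -1]) / eps:.2f}")
```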
52

Simulation-Based Design Under Uncertainty for Compliant Microelectromechanical Systems

Wittwer, Jonathan W. 11 March 2005 (has links)
The high cost of experimentation and product development in the field of microelectromechanical systems (MEMS) has led to a greater emphasis on simulation-based design for increasing first-pass design success and reliability. The use of compliant or flexible mechanisms can help eliminate friction, wear, and backlash, but compliant MEMS are sensitive to variations in material properties and geometry. This dissertation proposes approaches for design-stage uncertainty analysis, model validation, and robust optimization of nonlinear compliant MEMS to account for critical process uncertainties including residual stress, layer thicknesses, edge bias, and material stiffness. Methods for simulating and mitigating the effects of non-idealities such as joint clearances, semi-rigid supports, non-ideal loading, and asymmetry are also presented. Approaches are demonstrated and experimentally validated using bistable micromechanisms and thermal microactuators as examples.
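As a rough illustration of design-stage uncertainty analysis for a compliant member, the sketch below propagates assumed process variations (layer thickness, edge bias, material stiffness) through a textbook cantilever-stiffness formula via Monte Carlo; the distributions and the flexure model are illustrative assumptions, not the dissertation's validated MEMS models.

```python
# Monte Carlo propagation of assumed process variations through the end
# stiffness of a cantilever flexure, k = E*w*t^3 / (4*L^3).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

E = rng.normal(160e9, 10e9, n)       # polysilicon modulus (Pa), assumed spread
t = rng.normal(2.0e-6, 0.1e-6, n)    # layer thickness (m)
bias = rng.normal(0.0, 0.05e-6, n)   # edge bias on the in-plane width (m)
w = 4.0e-6 + bias                    # drawn width of 4 um plus edge bias
L = 100e-6                           # beam length, treated as deterministic

k = E * w * t**3 / (4 * L**3)        # cantilever end stiffness (N/m)

print(f"mean k = {k.mean()*1e3:.3f} mN/m, c.o.v. = {k.std()/k.mean():.1%}")
```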
53

Phase-averaged stereo-PIV flow field and force/moment/motion measurements for surface combatant in PMM maneuvers

Yoon, Hyunse 01 December 2009 (has links)
Towing-tank experiments are performed for a surface combatant undergoing static and dynamic planar motion mechanism (PMM) maneuvers in calm water. The data include global forces/moment/motions, phase-averaged local flow fields, and uncertainty assessment (UA). The geometry is DTMB model 5512, a 1/46.6-scale geosym of DTMB model 5415 with L = 3.048 m. The experiments are performed in a 3.048 × 3.048 × 100 m towing tank. The measurement system features a custom-designed planar motion mechanism, a towed stereoscopic particle image velocimetry system, a Krypton contactless motion tracker, and a 6-component loadcell. The forces/moment measurements and UA are conducted in collaboration with two international facilities (FORCE and INSEAN), including a shared test matrix and overlapping tests using the same model geometry at different scales. Data quality is assessed by monitoring statistical convergence, including tests for randomness, stationarity, and normality. Uncertainty is assessed following the ASME Standards (1998 and 2005). Hydrodynamic derivatives are determined from the forces/moment data using the Abkowitz (1966) mathematical model with two methods, 'Multiple-Run (MR)' and 'Single-Run (SR)'. Reconstructions of the forces/moment indicate that the MR method is usually more accurate than the SR method. Comparisons of the hydrodynamic derivatives across facilities show that the scale effect is small for sway derivatives but considerable for yaw derivatives. Heave, pitch, and roll motions exhibit cross-coupling between the motions and the forces/moment data, as expected from ship motions theory. Hydrodynamic derivatives are also compared between mount conditions: linear derivative values are less sensitive to the mounting conditions, whereas the non-linear derivatives differ considerably. Phase-averaged flow-field results reveal maneuvering-induced vortices and their interactions with the turbulent boundary layer. The tests are documented in sufficient detail to serve as benchmark EFD data for CFD validation.
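For readers unfamiliar with the derivative-fitting step, the sketch below shows a 'Multiple-Run'-style least-squares regression of sway force against odd powers of sway velocity, in the spirit of an Abkowitz-type model; the synthetic data and coefficient values are illustrative, not the thesis's measurements.

```python
# Fit linear and cubic sway-force derivatives, Y = Yv*v + Yvvv*v^3, by
# ordinary least squares over a set of static-drift runs.
import numpy as np

rng = np.random.default_rng(1)
v = np.linspace(-0.4, 0.4, 41)                 # sway velocity (m/s), one per run
Y_true = -0.8 * v - 2.5 * v**3                 # assumed Y_v and Y_vvv
Y_meas = Y_true + rng.normal(0, 0.01, v.size)  # add measurement noise

A = np.column_stack([v, v**3])                 # regressor matrix
coef, *_ = np.linalg.lstsq(A, Y_meas, rcond=None)
print(f"Y_v ~ {coef[0]:.3f}, Y_vvv ~ {coef[1]:.3f}")
```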
54

Stochastic Assessment of Climate-Induced Risk for Water Resources Systems in a Bottom-Up Framework

Alodah, Abdullah 23 October 2019 (has links)
Significant challenges in water resources management arise from the ever-increasing pressure on the world's heavily exploited and limited water resources. These stressors include demographic growth, intensification of agriculture, climate variability, and climate change. Such challenges are usually tackled using a top-down approach, which suffers from many limitations, including the use of a limited set of climate change scenarios, the lack of a methodology for ranking these scenarios, and a lack of credibility, particularly regarding extremes. The recently introduced bottom-up approach reverses the process by assessing the vulnerabilities of water resources systems to variations in future climates and determining the plausibility of such a wide range of changes. While it solves some issues of the top-down approach, several remain unaddressed. The current project seeks to provide end-users and the research community with an improved version of the bottom-up framework for streamlining climate variability into water resources management decisions. The improvements tackled are: (a) the generation of a sufficient number of climate projections to provide better coverage of the risk space; (b) a methodology to quantitatively estimate the plausibility of a desired or undesired future outcome; and (c) the optimization of the size of the projection pool to achieve the desired precision with minimal time and computing resources. The results will hopefully help cope with present-day and future challenges induced mainly by climate. In the first part of the study, the adequacy of stochastically generated climate time series for water resources systems risk and performance assessment is investigated. A number of stochastic weather generators (SWGs) are first used to generate a large number of realizations (i.e., an ensemble of climate outputs) of precipitation and temperature time series. Each realization is then used individually as input to a hydrological model to obtain streamflow time series. The usefulness of the weather generators is evaluated by assessing how the statistical properties of simulated precipitation, temperature, and streamflow deviate from those of observations. This is achieved by plotting a large ensemble of (1) synthetic precipitation and temperature time series in a Climate Statistics Space (CSS), and (2) hydrological indices derived from simulated streamflow in a Risk and Performance Indicators Space (RPIS). Weather generator performance is assessed using visual inspection and the Mahalanobis distance between statistics derived from observations and simulations. A case study was carried out using five weather generators: two versions of WeaGETS, two versions of MulGETS, and the k-nearest neighbor weather generator (knn). In the second part of the thesis, the impacts of climate change are evaluated by generating a large number of representative climate projections. Large ensembles of future series are created by perturbing downscaled regional climate model outputs with a stochastic weather generator and then used as inputs to a hydrological model calibrated on observed data. Risk indices calculated from the simulated streamflow data are converted into probability distributions using kernel density estimation.
The results are multidimensional joint probability distributions of risk-relevant indices that provide estimates of the likelihood of unwanted events under a given watershed configuration and management policy. The proposed approach offers a more complete vision of the impacts of climate change and opens the door to a more objective assessment of adaptation strategies. The third part of the thesis deals with estimating the optimal number of SWG realizations needed to calculate risk and performance indices. The number of realizations required to reach stable estimates is investigated using the Relative Root Mean Square Error and the Relative Error. While the results indicate that a single realization is not enough to adequately represent a given stochastic weather generator, they also indicate no major benefit in generating more than 100 realizations, as these are not notably different from results obtained using 1000 realizations. Adopting a smaller but carefully chosen number of realizations can significantly reduce computational time and resources, and therefore benefit a larger audience, particularly where high-performance machines are not easily accessible. The application was carried out in one pilot watershed, the South Nation Watershed in Eastern Ontario, yet the methodology will be of interest for Canada and beyond. Overall, the results contribute to making the bottom-up approach more objective and less computationally intensive, hence more attractive to practitioners and researchers.
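The Mahalanobis-distance assessment described in the first part can be sketched as follows: compute a vector of statistics for each stochastic realization, then measure how far the observed statistics lie from that ensemble. The chosen statistics and the gamma-distributed toy series are assumptions for illustration, not the thesis's SWG outputs.

```python
# Mahalanobis distance of observed statistics from an ensemble of statistics
# computed on stochastic realizations, in a small "climate statistics space".
import numpy as np

def stats_vector(x):
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation
    return np.array([x.mean(), x.std(ddof=1), r1])

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 365)                 # stand-in for an observed series
sims = [rng.gamma(2.0, 3.0, 365) for _ in range(200)]  # 200 SWG realizations

S = np.array([stats_vector(s) for s in sims])  # ensemble in statistics space
mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
d = stats_vector(obs) - mu
d_m = np.sqrt(d @ np.linalg.inv(cov) @ d)
print(f"Mahalanobis distance of observations from the ensemble: {d_m:.2f}")
```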
55

Uncertainty Analysis of Mechanical Properties from Miniature Tensile Testing of High Strength Steels

Malpally, Deepthi Rao 01 May 2014 (has links)
This miniature mechanical testing study concerns the use of miniature specimens to identify the mechanical properties of stainless steel Type 304, sensitized Type 304, and SA516 Grade 70 carbon steel as a viable replacement for standard-sized mechanical testing. The study aims at obtaining a suitable specimen geometry and tensile testing procedure for miniature mechanical testing whose mechanical properties are comparable to those of conventional ASTM A370-10 specimens of the same steel. All specimens are flat, and the gauge-length cross-section is varied to obtain a suitable geometry. The miniature tensile testing results are further validated using the Monte Carlo Method (MCM) for uncertainty estimation, in order to determine the probability distributions of the mechanical properties. Miniature specimens with a cross-section of 3 mm² and a 12 mm gauge length are found to produce mechanical properties equivalent to those tested from standard-sized specimens. This agreement provides a very useful tool for evaluating the mechanical properties of degraded materials that cannot be removed from service for standard testing, for repair and service-life evaluation.
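A minimal sketch of the Monte Carlo Method step is given below; it propagates assumed (purely illustrative) measurement uncertainties in load and specimen dimensions into a distribution for the ultimate tensile strength.

```python
# Monte Carlo propagation of measurement uncertainty into the engineering
# ultimate tensile strength of a miniature flat specimen (~3 mm^2 section).
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

P_max = rng.normal(1.9e3, 10.0, n)      # peak load (N) with load-cell noise
width = rng.normal(3.0e-3, 0.02e-3, n)  # measured width (m)
thick = rng.normal(1.0e-3, 0.01e-3, n)  # measured thickness (m)

uts = P_max / (width * thick)           # ultimate tensile strength (Pa)

lo, hi = np.percentile(uts, [2.5, 97.5])
print(f"UTS = {uts.mean()/1e6:.0f} MPa, "
      f"95% interval [{lo/1e6:.0f}, {hi/1e6:.0f}] MPa")
```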
56

Integrated Systems Modeling to Improve Watershed Habitat Management and Decision Making

Alafifi, Ayman H. 01 May 2018 (has links)
Regulated rivers provide opportunities to improve habitat quality by managing the times, locations, and magnitudes of reservoir releases and diversions across the watershed. To identify these opportunities, managers select priority species and determine when, where, and how to allocate water between competing human and environmental users in the basin. Systems models have been used to recommend allocations of water between species. However, many models consider species' water needs only as constraints on instream flow that is managed to maximize human beneficial uses. Many models also incorporate uncertainty in the system and report an overwhelmingly large number of management alternatives. This dissertation presents three novel models to recommend the allocation of water and money to improve habitat quality. The new models also facilitate communicating model results to managers and to the public. First, a new measurable and observable habitat metric quantifies habitat area and quality for priority aquatic, floodplain, and wetland species. The metric is embedded in a systems model as an ecological objective to maximize; the systems model helps managers identify the times and locations at which to apply scarce water to most improve habitat area and quality for multiple competing species. Second, a cluster analysis approach is introduced to reduce large-dimensional uncertainty problems in habitat models and to focus management efforts on the important parameters to measure and monitor more carefully. The approach includes manager preferences in the search for clusters, and it identifies a few easy-to-interpret management options from a large multivariate space of possible alternatives. Third, an open-access web tool helps water resources modelers display model outputs on an interactive web map. The tool allows modelers to construct node-link networks on a web map and facilitates sharing and visualizing spatial and temporal model outputs. The dissertation applies all three studies to the Lower Bear River, Utah, to guide ongoing habitat conservation efforts, recommend water allocation strategies, and provide important insights into ways to improve overall habitat quality and area.
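To make the cluster-analysis idea concrete, here is a hedged sketch that collapses a large set of simulated management alternatives into a few representative options with k-means; the feature choices, units, and number of clusters are illustrative assumptions, not the dissertation's actual formulation.

```python
# Cluster thousands of simulated management alternatives (decision variables
# plus outcomes) into a handful of representative, easy-to-interpret options.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(4)
# 5000 alternatives x 4 features: two release decisions, cost, habitat score.
alts = np.column_stack([
    rng.uniform(0, 30, 5000),     # spring reservoir release (Mm^3)
    rng.uniform(0, 30, 5000),     # summer release (Mm^3)
    rng.uniform(1, 10, 5000),     # cost (M$)
    rng.normal(0.6, 0.15, 5000),  # habitat quality index
])
z = (alts - alts.mean(0)) / alts.std(0)        # standardize features

centroids, labels = kmeans2(z, 4, seed=4, minit="++")
for i, c in enumerate(centroids * alts.std(0) + alts.mean(0)):
    print(f"option {i}: releases ({c[0]:.0f}, {c[1]:.0f}) Mm^3, "
          f"cost {c[2]:.1f} M$, habitat {c[3]:.2f}")
```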
57

Efficient Methods for Predicting Soil Hydraulic Properties

Minasny, Budiman January 2000 (has links)
Both empirical and process-simulation models are useful for evaluating the effects of management practices on environmental quality and crop yield. The use of these models is limited, however, because they need many soil property values as input. The first step towards modelling is the collection of input data. Soil properties can be highly variable spatially and temporally, and measuring them is time-consuming and expensive. Efficient methods for estimating soil hydraulic properties, which consider the uncertainty and cost of measurements, form the main thrust of this study. Hydraulic properties are affected by other soil physical and chemical properties; it is therefore possible to develop empirical relations to predict them. Such a relation, once quantified, is called a pedotransfer function. These functions may be global or restricted to a country or region. The different classification of particle-size fractions used in Australia compared with other countries presents a problem for the immediate adoption of exotic pedotransfer functions. A database of Australian soil hydraulic properties has been compiled. Pedotransfer functions for estimating water retention and saturated hydraulic conductivity from particle size and bulk density for Australian soil are presented. Different approaches for deriving hydraulic transfer functions are presented and compared. Published pedotransfer functions were also evaluated; generally they provide a satisfactory estimation of water retention and saturated hydraulic conductivity, depending on the spatial scale and accuracy of prediction. Several pedotransfer functions were developed in this study to predict water retention and hydraulic conductivity. The pedotransfer functions developed here may predict adequately over large areas, but local calibration is needed for site-specific applications. There is much uncertainty in the input data, and consequently the transfer functions can produce varied outputs; uncertainty analysis is therefore needed. A general approach to quantifying uncertainty is to use Monte Carlo methods: by sampling repeatedly from the assumed probability distributions of the input variables and evaluating the response of the model, the statistical distribution of the outputs can be estimated. A modified Latin hypercube method is presented for sampling joint multivariate probability distributions. This method is applied to quantify the uncertainties in pedotransfer functions of soil hydraulic properties. Hydraulic properties predicted using the pedotransfer functions developed in this study are also used in a field soil-water model to analyse the uncertainties in the prediction of dynamic soil-water regimes. The use of the disc permeameter in the field conventionally requires the placement of a layer of sand to provide good contact between the soil surface and the disc supply membrane. The effect of the sand on water infiltration into the soil and on the estimate of sorptivity was investigated through a numerical study and a field experiment on heavy clay. Placement of sand significantly increased the cumulative infiltration but produced only small differences in the infiltration rate. Estimation of sorptivity based on Philip's two-term algebraic model using different methods was also examined. The field experiment revealed that the error in infiltration measurement was proportional to the cumulative infiltration curve. Infiltration without placement of sand was considerably smaller because of the poor contact between the disc and the soil surface.
An inverse method for predicting soil hydraulic parameters from disc permeameter data has been developed. A numerical study showed that the inverse method is quite robust in identifying the hydraulic parameters. However, application to field data showed that the estimated water retention curve is generally smaller than the one obtained in laboratory measurements. Nevertheless, the estimated near-saturated hydraulic conductivity matched the analytical solution quite well. The author believes that the inverse method can give a reasonable estimate of soil hydraulic parameters. Some experimental and theoretical problems were identified and discussed. A formal analysis was carried out to evaluate the efficiency of the different methods in predicting water retention and hydraulic conductivity. The analysis identified the contribution of individual sources of measurement error to the overall uncertainty. For single measurements, the inverse disc-permeameter analysis is economically more efficient than using pedotransfer functions or measuring hydraulic properties in the laboratory. However, given the large spatial variation of soil hydraulic properties, it is perhaps not surprising that many cheap and imprecise measurements, e.g. by hand texturing, are more efficient than a few expensive precise ones.
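As a hedged illustration of Latin hypercube sampling for correlated pedotransfer-function inputs, the sketch below draws stratified samples and imposes a correlation between clay content and bulk density through a Gaussian copula; the marginals, the correlation value, and the toy transfer function are assumptions, not the thesis's modified method.

```python
# Latin hypercube sampling of two correlated soil properties, propagated
# through a toy pedotransfer function for saturated water content.
import numpy as np
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=5)
u = sampler.random(n=1000)                   # stratified uniforms in [0,1]^2

# Impose rank correlation between clay content and bulk density via a
# Gaussian copula (a simple stand-in for joint multivariate sampling).
rho = -0.4
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
z = norm.ppf(u) @ L.T
clay = norm.cdf(z[:, 0]) * 60                # clay content, 0-60 %
bulk = 1.1 + norm.cdf(z[:, 1]) * 0.6         # bulk density, 1.1-1.7 g/cm^3

# Toy PTF: porosity from bulk density plus a small clay correction.
theta_s = 1 - bulk / 2.65 + 0.001 * clay
print(f"theta_s: mean {theta_s.mean():.3f}, 5-95% "
      f"[{np.percentile(theta_s, 5):.3f}, {np.percentile(theta_s, 95):.3f}]")
```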
58

Assessing pesticide leaching at the regional scale : a case study for atrazine in the Dyle catchment

Leterme, Bertrand 14 December 2006 (has links)
The overall objective of this thesis is to better understand and assess pesticide leaching at the regional scale, using both the analysis of monitoring data and spatially distributed modelling. Atrazine contamination of the Brusselian aquifer (central Belgium) is poorly understood, and considerable uncertainty surrounds whether the pollution is agricultural or non-agricultural in origin. The spatial and temporal covariance of atrazine concentrations was studied by fitting semivariogram models to monitoring data. Correlation ranges were found to be 600 metres and 600-700 days. A non-parametric one-way ANOVA found a strong relationship between mean concentrations and land use, whilst other environmental variables were found to be less important. Higher levels of pollution were detected in areas dominated by urban land use, suggesting that atrazine residues in groundwater resulted from non-agricultural applications. Modelling pesticide leaching at the regional scale (Dyle catchment) was used to assess groundwater vulnerability. Different approaches to processing soil information were tested with both a linear (modified Attenuation Factor) and a non-linear (GeoPEARL) leaching model. The CI (calculate first, interpolate later) and IC (interpolate first, calculate later) approaches gave identical results for the linear model, but differences in the amount of leaching were found for the non-linear model. The CI approach would be expected to give better results than IC, but the CA (calculate alone) approach is probably the best method if no spatial output is required. Finally, a methodology was developed to quantify the uncertainty arising from the spatial variability of non-georeferenced parameters (i.e. those assumed to be spatially constant in deterministic simulations). A Monte Carlo analysis of atrazine leaching was performed with six pesticide and soil properties as uncertain inputs. Spatial variability of the non-georeferenced parameters had a significant influence on the amount of simulated leaching: in the stochastic simulations, concentrations above the regulatory level of 0.1 µg/L occurred, while virtually no leaching occurred in the deterministic simulation. Including the spatial variability of substance parameters (half-life, sorption coefficient...) would have significant consequences for future registration policies, especially if risk assessments are implemented in a spatially distributed way.
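The semivariogram-fitting step can be sketched as follows: fit a spherical model to empirical semivariances by non-linear least squares. The empirical values below are illustrative stand-ins; only the roughly 600 m range echoes the text.

```python
# Fit a spherical semivariogram model to empirical semivariances.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical semivariogram with nugget, partial sill, and range a."""
    g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, nugget + sill)

h = np.array([50, 150, 250, 350, 450, 550, 700, 900.0])        # lag (m)
gamma = np.array([0.02, 0.05, 0.08, 0.10, 0.11, 0.12, 0.12, 0.12])

p, _ = curve_fit(spherical, h, gamma, p0=[0.01, 0.1, 600], maxfev=10_000)
print(f"nugget {p[0]:.3f}, partial sill {p[1]:.3f}, range {p[2]:.0f} m")
```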
59

Analysis of flood hazard under consideration of dike breaches

Vorogushyn, Sergiy January 2008 (has links)
River reaches protected by dikes exhibit high damage potential due to the strong accumulation of value in the hinterland areas. While providing efficient protection against low-magnitude flood events, dikes may fail under the load of extreme water levels and long flood durations. Hazard and risk assessments for river reaches protected by dikes have not adequately considered fluvial inundation processes up to now. In particular, the processes of dike failures and their influence on hinterland inundation and flood wave propagation lack comprehensive consideration. This study focuses on the development and application of a new modelling system that allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) is a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime: (1) a 1D unsteady hydrodynamic model of river channel and floodplain flow between the dikes, (2) a probabilistic dike breach model which determines possible dike breach locations, breach widths, and breach outflow discharges, and (3) a 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the coupled 1D and 2D models, the dependence between hydraulic loads at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping, and slope instability caused by seepage flow through the dike core (micro-instability). The 2D storage cell model, driven by the breach outflow boundary conditions, computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration, and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes, reflected in the form of input hydrographs, and for the randomness of dike failures, given by breach locations, times, and widths. The model was developed and tested on a ca. 91 km heavily diked river reach on the German part of the Elbe River between the gauges Torgau and Vockerode. The reach is characterised by low slope and fairly flat, extended hinterland areas. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100, 200, 500, and 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. In the disaggregated display mode, the dike hazard maps indicate the failure probabilities for each considered breach mechanism. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. Finally, scenarios of polder deployment for the extreme floods with T = 200, 500, and 1000 a were simulated with IHAM. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions, incorporating the effects of technical flood protection measures.
With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has high practical value for decision support in flood management.
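A highly simplified sketch of the Monte Carlo structure follows, reduced to sampling a flood peak and evaluating a toy fragility curve for a single dike section; all distributions and parameters are illustrative assumptions standing in for IHAM's coupled 1D/2D physics.

```python
# Monte Carlo over flood events: sample a peak water level, evaluate a toy
# overtopping fragility curve, and record whether the dike section breaches.
import numpy as np

rng = np.random.default_rng(6)
n_runs = 10_000

# Gumbel-distributed peak water level at the dike section (assumed).
peaks = rng.gumbel(loc=6.0, scale=0.5, size=n_runs)       # metres

def p_fail(h, crest=7.0, k=4.0):
    """Toy fragility: failure probability rises steeply near the crest."""
    return 1.0 / (1.0 + np.exp(-k * (h - crest)))

breached = rng.random(n_runs) < p_fail(peaks)
print(f"estimated breach probability per event: {breached.mean():.3f}")
```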
60

A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

Akram, Muhammad Farooq 28 March 2012 (has links)
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods, and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation is critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio: when multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear, yet current methods assume that technologies are either incompatible or linearly independent. When knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions in the elicitation process, where experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantifying epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems of deterministic or traditional probabilistic approaches to uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty on a large-scale problem, a combined-cycle power generation system, was selected, and a detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results show that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability-theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is given. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and improved predicted system performance.
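To illustrate the evidence-theory machinery the abstract refers to, here is a small sketch of Dempster's rule of combination applied to two hypothetical expert opinions over a two-element frame; the frame and the mass assignments are illustrative assumptions, not the thesis's elicited data.

```python
# Dempster's rule of combination for two basic probability assignments
# (mass functions) over focal sets represented as frozensets.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; normalize out the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
EITHER = LOW | HIGH                   # full frame expresses expert ignorance

expert1 = {HIGH: 0.6, EITHER: 0.4}    # leans high, 0.4 uncommitted
expert2 = {HIGH: 0.3, LOW: 0.3, EITHER: 0.4}

for focal, mass in combine(expert1, expert2).items():
    print(set(focal), round(mass, 3))
```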
