  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Application of Modular Uncertainty Techniques to Engineering Systems

Long, William C 04 May 2018 (has links)
Uncertainty analysis is crucial to any thorough analysis of an engineering system. Traditional uncertainty analysis can be a tedious task involving numerous steps that are error prone if conducted by hand; if conducted with the aid of a computer, these tasks can be computationally expensive. In either case, the process is quite rigid: if a parameter of the system is modified or the system configuration is changed, the entire uncertainty analysis must be repeated, giving more opportunities for calculation errors or added computation time. Modular uncertainty analysis provides a method to overcome these obstacles of traditional uncertainty analysis. The modular technique is well suited for computation by a computer, which makes the process largely automatic after the initial setup and reduces computation errors. The modular technique implements matrix operations to conduct the analysis. This in turn makes the process more efficient than traditional methods, because computers are well suited for matrix operations. Since the modular technique implements matrix operations, the method adapts readily to system parameter or configuration modifications. The modular technique also lends itself to quickly calculating other uncertainty analysis parameters, such as the uncertainty magnification factor and the uncertainty percent contribution. This dissertation will focus on the modular technique, the extension of the technique in the form of the uncertainty magnification factor and uncertainty percent contribution, and the application of the modular technique to different types of energy systems. The modular technique is applied to an internal combustion engine with a bottoming organic Rankine cycle system, a combined heat and power system, and a heating, ventilation, and air conditioning system. The results show that the modular technique is well suited to evaluate complex engineering systems and that it performs well when system parameters or configurations are modified.
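The matrix-based propagation described in this abstract can be sketched in a few lines of first-order uncertainty propagation. The result function, sensitivities, and tolerance values below are hypothetical illustrations, not values from the dissertation; the uncertainty magnification factor (UMF) and uncertainty percent contribution (UPC) follow their common textbook definitions.

```python
import numpy as np

def propagate(theta, u):
    """First-order propagation: u_r^2 = sum((theta_i * u_i)^2).
    theta: sensitivities dr/dx_i; u: input standard uncertainties."""
    contrib = (theta * u) ** 2
    u_r = np.sqrt(contrib.sum())
    upc = 100.0 * contrib / contrib.sum()  # uncertainty percent contribution
    return u_r, upc

# Hypothetical two-parameter result r = x1 * x2
x = np.array([3.0, 4.0])            # nominal values
u = np.array([0.1, 0.2])            # input uncertainties
theta = np.array([x[1], x[0]])      # dr/dx1 = x2, dr/dx2 = x1
r = x[0] * x[1]
umf = theta * x / r                 # uncertainty magnification factors
u_r, upc = propagate(theta, u)      # result uncertainty and contributions
```

Because the sensitivities enter through simple array (matrix) operations, changing a parameter or reconfiguring the system only requires updating `x`, `u`, or `theta` and re-running the propagation, which mirrors the adaptability claimed for the modular technique.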
402

Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification

Zavar Moosavi, Azam Sadat 13 March 2018 (has links)
Simulations and modeling of large-scale systems are vital to understanding real world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, input data, or physics that is missing from the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and their numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality and have uncertainties due to missing dynamics that are not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models; these reduced-order models have uncertainty due to the loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods, where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes. / Ph. D. / Computational models are used to understand the behavior of natural phenomena. Models approximate the evolution of the true phenomenon in time, and more accurate forecasts are obtained by combining the model approximation with observations from reality. Weather forecasting, oceanography, and geoscience provide some examples of such models. However, a model can only approximate reality to some extent, and the approximation is imperfect due to several sources of error or uncertainty: noise in measurements or observations from nature, uncertainty in some model components, components missing from the model, and interactions between different components of the model all cause the model forecast to differ from reality. The aim of this study is to explore techniques for modeling the error and uncertainty of computational models, and to provide solutions and remedies that reduce the error of the model forecast and ultimately improve it. Taking the discrepancy between model forecast and reality over time and mining that error provides valuable information about the origin of uncertainty in models, as well as the hidden dynamics not considered in the model. Statistical and machine learning based solutions are proposed in this study to identify the source of uncertainty, capture it, and use that information to reduce the error and enhance the model forecast. We studied error modeling, uncertainty quantification, and reduction techniques in several frameworks, from chemical models to weather forecast models. In each case, we aimed to detect the origin of uncertainty, model the error, and reduce the uncertainty to improve the model forecast.
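The discrepancy-mining idea described above can be illustrated with a minimal sketch under invented assumptions: a toy "truth" containing dynamics that the model truncates away, and an ordinary least-squares polynomial fit standing in for the statistical and machine learning methods of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: the "truth" is sin(3x); the imperfect model is its
# truncated Taylor series, so it is missing higher-order dynamics.
x = rng.uniform(-1, 1, size=200)
truth = np.sin(3 * x)
model_forecast = 3 * x * (1 - (3 * x) ** 2 / 6)   # truncated series
error = truth - model_forecast                     # observed discrepancy

# "Mine" the discrepancy: learn the structural error as a function of
# the state via least squares on a polynomial basis, then use it to
# correct new forecasts.
A = np.column_stack([x ** k for k in range(6)])
coef, *_ = np.linalg.lstsq(A, error, rcond=None)

def corrected(z):
    """Model forecast plus the learned error correction at state z."""
    base = 3 * z * (1 - (3 * z) ** 2 / 6)
    feats = np.array([z ** k for k in range(6)])
    return base + feats @ coef
```

The corrected forecast at an unseen state should sit closer to the truth than the raw model output, which is the essence of using the mined discrepancy to reveal and compensate for missing dynamics.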
403

Power Electronics Design Methodologies with Parametric and Model-Form Uncertainty Quantification

Rashidi Mehrabadi, Niloofar 27 April 2018 (has links)
Modeling and simulation have become fully ingrained into the set of design and development tools that are broadly used in the field of power electronics. Simply stated, they represent the fastest and safest way to study a circuit or system, thus aiding in the research, design, diagnosis, and debugging phases of power converter development. Advances in computing technologies have also enabled reliability and production yield analyses to ensure that system performance can meet given requirements despite the presence of inevitable manufacturing variability and variations in operating conditions. However, the trustworthiness of all model-based design techniques depends entirely on the accuracy of the simulation models used, which, thus far, has not been fully considered. Prior to this research, heuristic safety factors were used to compensate for the deviation of real system performance from the predictions made using modeling and simulation, which invariably resulted in a more conservative design process. In this research, a modeling and design approach with parametric and model-form uncertainty quantification is formulated to bridge the modeling and simulation accuracy and reliance gaps that have hindered the full exploitation of model-based design techniques. Prior to this research, the few design approaches developed to account for variability in the design process had not shown themselves applicable to complex systems. This research, however, demonstrates that the proposed modeling approach can handle complex power converters and systems. A systematic study for developing a simplified test bed for uncertainty quantification analysis is introduced accordingly.
For illustrative purposes, the proposed modeling approach is applied to the switching model of a modular multilevel converter to improve the existing modeling practice and validate the model used in the design of this large-scale power converter. The proposed modeling and design methodology is also extended to design optimization, where a robust multi-objective design and optimization approach with parametric and model-form uncertainty quantification is proposed. A sensitivity index is defined accordingly as a quantitative measure of system design robustness with regard to manufacturing variability and modeling inaccuracies in the design of systems with multiple performance functions. The optimum design solution is realized by exploring the Pareto front of the enhanced performance space, where the model-form error associated with each design is used to modify the estimated performance measures. The parametric sensitivity of each design point is also considered to discern between cases and help identify the most parametrically robust of the Pareto-optimal design solutions. To demonstrate the benefits of incorporating uncertainty quantification analysis into design optimization from a more practical standpoint, a Vienna-type rectifier is used as a case study to compare the theoretical analysis with a comprehensive experimental validation. This research shows that the model-form error and sensitivity of each design point can potentially change the performance space and the resultant Pareto front. As a result, ignoring these main sources of uncertainty in the design will result in incorrect decision-making and the choice of a design that is not an optimum design solution in practice. / Ph. D. / Modeling and simulation have become fully ingrained into the set of design and development tools that are broadly used in the field of power electronics. Simply stated, they represent the fastest and safest way to study a circuit or system, thus aiding in the research, design, diagnosis, and debugging phases of power converter development. Advances in computing technologies have also enabled reliability and production yield analyses to ensure that system performance can meet given requirements despite the presence of inevitable manufacturing variability and variations in operating conditions. However, the trustworthiness of all model-based design techniques depends entirely on the accuracy of the simulation models used, which has not yet been fully considered. In this research, a modeling and design approach with parametric and model-form uncertainty quantification is formulated to bridge the modeling and simulation accuracy and reliance gaps that have hindered the full exploitation of model-based design techniques. The proposed modeling and design methodology is also extended to design optimization, where a robust multi-objective design and optimization approach with parametric and model-form uncertainty quantification is proposed. A sensitivity index is defined accordingly as a quantitative measure of system design robustness with regard to manufacturing variability and modeling inaccuracy in the design of systems with multiple performance functions. This research shows that the model-form error and sensitivity of each design point can potentially change the performance space and the resultant Pareto front. As a result, ignoring these main sources of uncertainty in the design will result in incorrect decision-making and the choice of a design that is not an optimum design solution in practice.
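The production yield analysis under parametric and model-form uncertainty that this abstract describes can be illustrated with a hedged Monte Carlo sketch. The circuit relation, component tolerances, spec limits, and model-error bound below are invented for illustration and are not the converter models or numbers used in the research.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical metric: output voltage set by a resistive divider ratio,
# Vout = R2 / (R1 + R2) * Vin, with manufacturing spread on R1, R2, Vin
# (illustrative only, not a design equation from the thesis).
R1 = rng.normal(10e3, 0.01 * 10e3, N)   # 1% tolerance
R2 = rng.normal(10e3, 0.01 * 10e3, N)
Vin = rng.normal(48.0, 0.5, N)
Vout = R2 / (R1 + R2) * Vin

# Parametric yield against a +/-2% specification around nominal 24 V
spec_lo, spec_hi = 24.0 * 0.98, 24.0 * 1.02
yield_frac = np.mean((Vout > spec_lo) & (Vout < spec_hi))

# Model-form uncertainty treated as a worst-case bias band: shrink the
# spec window by the assumed model error and recompute a robust yield.
model_err = 0.1   # volts; an assumed validation-derived bound
robust_yield = np.mean((Vout - model_err > spec_lo) &
                       (Vout + model_err < spec_hi))
```

The gap between `yield_frac` and `robust_yield` is the kind of shift the abstract warns about: ignoring model-form error makes a design look better than it will perform in practice.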
404

Quality Assessment of Spatial Data: Positional Uncertainties of the National Shoreline Data of Sweden

Hast, Isak January 2014 (has links)
This study investigates the planimetric (x, y) positional accuracy of the National Shoreline (NSL) data, produced in collaboration between the Swedish mapping agency Lantmäteriet and the Swedish Maritime Administration (SMA). Due to the compound nature of shorelines, such data are afflicted by substantial positional uncertainties; in contrast, the positional accuracy requirements on NSL data are high. An apparent problem is that Lantmäteriet does not measure the positional accuracy of NSL in accordance with the NSL data product specification. In addition, there is currently little understanding of the latent positional changes of shorelines over time, which directly influence the accuracy of NSL. Therefore, in line with the two specific aims of this study, an accuracy assessment technique is first applied to measure the positional accuracy of NSL; second, positional changes of NSL over time are analysed. This study provides an overview of potential problems and future prospects of NSL, which Lantmäteriet can use to improve the quality assurance of the data. Two line-based NSL data sets within the NSL-classified regions of Sweden are selected, and their positional uncertainties are investigated using two distinct methodologies. First, an accuracy assessment method is applied and accuracy metrics based on the root-mean-square error (RMSE) are derived. The metrics are checked against specification and standard accuracy tolerances; the calculated RMSE values, compared with these tolerances, indicate an approved accuracy for the tested data. Second, positional changes of NSL data are measured using a proposed space-time analysis technique. The results of the analysis reveal significant discrepancies between the two areas investigated, indicating that one of the test areas is influenced by much greater positional changes over time. The accuracy assessment method used in this study has a number of apparent constraints; one is the potential presence of bias in the derived accuracy metrics. Given these restrictions, the preferred method for assessing the positional accuracy of NSL is visual inspection against aerial photographs. Regarding the result of the space-time analysis, one important conclusion can be drawn: the time-dependent positional discrepancies between the two areas investigated indicate that Swedish coastlines are affected by divergent degrees of positional change over time. Therefore, Lantmäteriet should consider updating NSL data at different intervals, depending on the prevailing regional changes, in order to assure the specified positional accuracy of the entire NSL data structure.
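The RMSE-based accuracy metric used in the assessment can be sketched directly. The check-point coordinates and the tolerance below are hypothetical, not values from the NSL product specification.

```python
import numpy as np

def planimetric_rmse(measured, reference):
    """RMSE of horizontal (x, y) positions against reference check points.
    measured, reference: arrays of shape (n, 2), same units (e.g. metres)."""
    d2 = np.sum((measured - reference) ** 2, axis=1)  # squared 2D offsets
    return np.sqrt(d2.mean())

# Hypothetical check points (metres) and an assumed accuracy tolerance
measured  = np.array([[10.2, 20.1], [30.0, 40.3], [50.1, 60.0]])
reference = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
rmse = planimetric_rmse(measured, reference)
tolerance = 2.0                 # assumed spec tolerance, metres
approved = rmse <= tolerance    # pass/fail against the tolerance
```

Checking the derived RMSE against a specification tolerance in this way is the comparison the study performs; the caveat noted above applies here too, since systematic bias in the check points inflates or deflates the metric.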
405

A Lean Six Sigma framework to enhance the competitiveness in selected automotive component manufacturing organisations

Rathilall, Raveen 14 January 2015 (has links)
Submitted in fulfilment of the requirements of the Degree Doctor of Technology: Quality, Durban University of Technology. 2014. / The South African automotive sector is often plagued with complex and competitive business challenges owing to globalisation, economic uncertainty and fluctuating market demands. These challenges prompt business leaders in South Africa to improve their operations and to enhance innovations in processes, products and services in a very reactive manner. Literature shows that one initiative that can help the automotive sector compete with the rest of the world, where productivity, quality and operational cost reduction are crucial for economic success, is the adoption of the integrated Lean Six Sigma tool. The automotive sector, which purports to be at the forefront of best industry manufacturing practices in South Africa, is certainly lacking in this area. The purpose of this thesis was to assess Lean and Six Sigma techniques as standalone systems and the integration of Lean and Six Sigma as a unified approach to continuous improvement, and to develop a proposed Lean Six Sigma framework for automotive component manufacturing organisations in KwaZulu-Natal (KZN), South Africa. Given the nature and complexity of this project, an action-based research strategy was adopted, incorporating both qualitative and quantitative techniques. Two hypotheses were formulated to guide the research. The study was confined to the greater Durban region in KZN, with a target population of forty-two organisations within the Durban Automotive Cluster (DAC). A survey questionnaire was designed in a measurable format to gather practical information from the sample organisations on the status of their existing business improvement programmes and quality practices. This information was necessary to critique the sample organisations against Lean and Six Sigma requirements and compare them with the literature in the KZN context.
A pilot study was conducted with senior management at five automotive manufacturing organisations to determine whether the participants encountered any problems in answering the questionnaire and whether the methodology adopted would meet the objectives of this project. The results of the pilot study indicated high reliability scores, which were sustained in the main study. The survey questionnaire was reviewed by Lean and Six Sigma experts, academics and members of the DAC executive team to ensure its validity in the KZN context. The logistics of the main study followed a similar format to the pilot study, and the questionnaires were distributed within the DAC over a three-month period. A census sample was used in the field study to collect primary data, and a response rate of 75% was achieved. The empirical findings revealed that the sample organisations had a very low success rate in adopting Lean and Six Sigma as standalone systems: they practised only certain Lean and Six Sigma tools and techniques, as they found it difficult to sustain the complete transition from theory to practice. The synergies between Lean and Six Sigma that affect manufacturing performance suggested that the two approaches complement and support each other by offsetting each other's deficiencies in a given environment. This information was translated into practical considerations for constructing the proposed Lean Six Sigma framework from a KZN perspective. The conclusion of the main study was that if an organisation wants improvement to happen on an ongoing basis, it needs to recognise that there are significant interactions between its management system and the improvement technique. When organisations understand the characteristics of the environment in which they operate, they will be able to configure appropriate follow-up processes to sustain their management systems.
The study demonstrated that Lean Six Sigma integration repackages the stronger focus areas of Lean and Six Sigma to create its own unique approach to improving an organisation's performance. It is anticipated that organisations which implement the proposed Lean Six Sigma framework could contribute significantly to the growth of the South African economy in terms of increased productivity, improved international competitiveness and job creation. The value of this research is that the proposed Lean Six Sigma framework affords the KZN automotive sector a unique opportunity to create its own brand of quality that complements its management style and industry demands. Future research should focus on testing the applicability of the proposed Lean Six Sigma framework in a real case scenario to ensure that the critical outcomes are adequately ingrained to achieve the perceived organisational performance. Lastly, it is recommended that a list of performance evaluators be developed and that follow-up procedures to monitor the progress of the Lean Six Sigma technique be implemented.
406

Relative-fuzzy : a novel approach for handling complex ambiguity for software engineering of data mining models

Imam, Ayad Tareq January 2010 (has links)
There are two main defined classes of uncertainty, namely fuzziness and ambiguity, where ambiguity is a 'one-to-many' relationship between the syntax and semantics of a proposition. This definition appears to ignore the 'many-to-many' relationship type of ambiguity. In this thesis, we use the term complex uncertainty for the many-to-many relationship type of ambiguity. This research proposes a new approach for handling the complex ambiguity type of uncertainty that may exist in data, for the software engineering of predictive Data Mining (DM) classification models. The proposed approach is based on Relative-Fuzzy Logic (RFL), a novel type of fuzzy logic. RFL gives a new formulation of the problem of the ambiguity type of uncertainty in terms of States Of Proposition (SOP), and describes its membership (semantic) value using a new definition of the Domain of Proposition (DOP), which is based on the relativity principle as defined by possible-worlds logic. Proposing RFL requires answering the following question: how can fuzzy logic and possible-worlds logic be combined to produce a new set of membership values (and later a logic) that is able to handle fuzziness and multiple viewpoints at the same time? The answer is to give possible-worlds logic the ability to quantify multiple viewpoints, to model fuzziness within each of these viewpoints, and to express this in a new set of membership values. Furthermore, a new architecture of Hierarchical Neural Network (HNN) called ML/RFL-Based Net has been developed in this research, along with new learning and recalling algorithms. The architecture, learning algorithm and recalling algorithm of ML/RFL-Based Net follow the principles of RFL; this new type of HNN can be considered an RFL computation machine. The ability of the relative-fuzzy-based DM prediction model to tackle the problem of the complex ambiguity type of uncertainty has been tested. Special-purpose Integrated Development Environment (IDE) software, called RFL4ASR, which generates a DM prediction model for speech recognition, was also developed in this research; this special-purpose IDE extends the definition of the traditional IDE. Using multiple sets of TIMIT speech data, the ML/RFL-Based Net prediction model achieves a classification accuracy of 69.2308%, which is higher than the best achieved by WEKA data mining machines on the same speech data.
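To make the "fuzziness plus multiple viewpoints" idea concrete, here is a hypothetical illustration of membership values indexed by possible world (viewpoint). The membership functions and the min-based conjunction are standard fuzzy-logic choices used only for illustration; they are not necessarily the RFL operators defined in the thesis.

```python
# Illustrative only: represent a proposition's membership value per
# "possible world" (viewpoint) rather than as a single scalar, so the
# same proposition can be fuzzy in different ways under different views.

def relative_membership(value, worlds):
    """worlds: dict mapping world name -> membership function."""
    return {w: mu(value) for w, mu in worlds.items()}

def conj(a, b):
    """Per-world conjunction via min (the classic Zadeh t-norm)."""
    return {w: min(a[w], b[w]) for w in a}

# Two viewpoints disagree on what counts as "tall" (heights in cm)
worlds = {
    "viewpoint_A": lambda h: max(0.0, min(1.0, (h - 160) / 30)),
    "viewpoint_B": lambda h: max(0.0, min(1.0, (h - 175) / 20)),
}
tall = relative_membership(182.0, worlds)
# One value per world: viewpoint_A rates 182 cm as fairly tall,
# viewpoint_B as only mildly tall.
```

The point of the sketch is the data structure: a membership *set* keyed by world, which is the kind of object a many-to-many (complex) ambiguity formulation needs, as opposed to the single membership scalar of ordinary fuzzy logic.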
407

A COMPARISON OF METHODS FOR MEASUREMENT OF PRESSURE IN HYDRAULIC LINES

Sprague, Susan, Chorney, Andrew 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / This presentation summarizes a study characterizing strain gages and pressure transducers used to measure the fluid pressure within aircraft hydraulic lines. A series of laboratory calibrations and finite element analyses was performed to demonstrate the quality of data from both pressure transducers and strain gages under variations in both temperature and external strains on the hydraulic lines. Strain gages showed a marked susceptibility to external strains on hydraulic lines, and wide variations in susceptibility to temperature changes. Pressure transducers were found to be relatively immune to both conditions. It is recommended that strain gages be used for trend data only.
408

Topics in portfolio choice : qualitative properties, time consistency and investment under model uncertainty

Kallblad, Sigrid Linnea January 2014 (has links)
The study of expected utility maximization in continuous-time stochastic market models dates back to the seminal work of Merton (1969) and has since been central to the area of Mathematical Finance. The associated stochastic optimization problems have been extensively studied. The problem formulation relies on two strong underlying assumptions: the ability to specify the underpinning market model and knowledge of the investor's risk preferences. However, neither of these inputs is easily available, if at all. The resulting issues have attracted continuous attention and prompted very active and diverse lines of research. This thesis seeks to contribute to this literature by studying questions related to both of the above issues. Specifically, we study the implications of certain qualitative properties of the utility function; we introduce, and study various aspects of, the notion of robust forward investment criteria; and we study the investment problem associated with risk- and ambiguity-averse preference criteria defined in terms of quasiconcave utility functionals.
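For orientation, the classical Merton problem referred to above can be stated compactly: an investor chooses the fraction of wealth $\pi_t$ held in a risky asset with drift $\mu$ and volatility $\sigma$ (the rest earning the riskless rate $r$) so as to maximize expected utility of terminal wealth. This is the standard textbook formulation, not notation specific to the thesis:

```latex
\sup_{\pi} \, \mathbb{E}\!\left[ U\!\left( X_T^{\pi} \right) \right],
\qquad
dX_t = X_t \left( r + \pi_t(\mu - r) \right) dt + X_t \pi_t \sigma \, dW_t .
```

Under CRRA utility $U(x) = x^{1-\gamma}/(1-\gamma)$ with risk aversion $\gamma > 0$, the optimal strategy is the constant fraction $\pi^\* = (\mu - r)/(\gamma \sigma^2)$, which makes plain the two strong inputs the thesis questions: the model parameters $(\mu, \sigma, r)$ and the preference parameter $\gamma$.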
409

Reducing Cognitive Load Using Adaptive Uncertainty Visualization

Block, Gregory 01 January 2013 (has links)
Uncertainty is inherent in many real-world settings; for example, in a combat situation, darkness may prevent a soldier from classifying approaching troops as friendly or hostile. In an environment plagued with uncertainty, decision-support systems, such as sensor-based networks, may make faulty assumptions about field conditions, especially when information is incomplete or sensor operations are disrupted. Displaying the factors that contribute to uncertainty informs the decision-making process for a human operator, but at the expense of limited cognitive resources such as attention, memory, and workload. This research applied principles of perceptual cognition to human-computer interface design to introduce uncertainty visualizations in an adaptive approach that improved the operator's decision-making process without unduly burdening the operator's cognitive load. An adaptive approach to uncertainty visualization considers the cognitive burden of all visualizations and reduces them according to relevancy as the user's cognitive load increases. Experiments were performed with 24 volunteer participants in a simulated environment that featured both intrinsic load and characteristics of uncertainty. The experiments conclusively demonstrated that adaptive uncertainty visualization reduced the cognitive burden on the operator's attention, memory, and workload, resulting in increased accuracy rates, faster response times, and a higher degree of user satisfaction. This research adds to the body of knowledge regarding the use of uncertainty visualization in the context of cognitive load. Existing research has not identified techniques to support uncertainty visualization without further burdening cognitive load. This research identified principles, such as goal-oriented visualization and salience, which promote the use of uncertainty visualization for improved decision-making without increasing cognitive load.
This research has extensive significance in fields where both uncertainty and cognitive load factors can reduce the effectiveness of decision-makers, such as sensor-based systems used in the military, or in first-responder situations.
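The adaptive reduction rule described above might be sketched as a relevance-ranked selection under a cognitive budget. The scoring scheme, the numbers, and the greedy selection below are invented for illustration; they are not the mechanism implemented in the research.

```python
# Hypothetical sketch: each uncertainty visualization carries a relevance
# score and a cognitive cost; as measured operator load rises, only the
# most relevant visualizations that fit the remaining budget are shown.

def select_visualizations(visuals, load, capacity=10.0):
    """visuals: list of (name, relevance, cost); load: current cognitive load."""
    budget = max(0.0, capacity - load)
    shown, used = [], 0.0
    # Greedily admit visualizations in decreasing order of relevance
    for name, relevance, cost in sorted(visuals, key=lambda v: -v[1]):
        if used + cost <= budget:
            shown.append(name)
            used += cost
    return shown

visuals = [
    ("sensor_confidence", 0.9, 3.0),
    ("coverage_gaps",     0.7, 4.0),
    ("signal_noise",      0.4, 5.0),
]
# Low load: more visualizations fit; high load: only the most relevant survive.
low  = select_visualizations(visuals, load=2.0)
high = select_visualizations(visuals, load=7.0)
```

The design choice this captures is the one the abstract argues for: instead of showing every uncertainty cue unconditionally, the display degrades gracefully by relevance so that added information never exceeds the operator's remaining capacity.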
410

Greenhouse gas emissions from contrasting beef production systems

Ricci, Patricia January 2014 (has links)
Among other anthropogenic activities, agriculture has been reported to contribute a significant amount of greenhouse gases to the atmosphere. With more than 870 million people in the world still suffering from under-nutrition and a growing global food demand, it is important to study ways of mitigating the environmental impact of food production. The objective of this work was to identify gaps in the knowledge regarding the main factors affecting greenhouse gas (GHG) emissions from beef farming systems, to reduce the uncertainty in carbon footprint predictions, and to study the relative importance of mitigation options at the system level. A lack of information in the literature was identified regarding the quantification of the animal characteristics of extensive beef systems that can affect methane (CH4) outputs. In a meta-analysis, it was observed that combining physiological stage and type of diet improved the accuracy of CH4 emission rate predictions. Furthermore, when applied to a system analysis, improved equations for predicting CH4 from ruminants under different physiological stages and diet types reduced the uncertainty of whole-farm enteric CH4 predictions by up to 7% over a year. In a modelling study, it was demonstrated that variations in grazing behaviour and grazing choice have a potentially large impact on CH4 emissions, one not normally accounted for in carbon budget calculations at either local or national scale. Methane estimates were highly sensitive to changes in diet quality, highlighting the importance of considering animal selectivity in the carbon budgets of heterogeneous grasslands. Part of the difficulty in collecting reliable information from grazing cattle is due to the limitations of available techniques for measuring CH4 emissions. Thus, the potential use of a Laser Methane Detector (LMD) for remote sensing of CH4 emissions from ruminants was evaluated, and a data analysis method was developed for the LMD outputs. This novel technique for assessing CH4 production from ruminants showed very good correlations with independent measurements in respiration chambers. Moreover, the use of this highly sensitive technique demonstrates that there is more variability associated with the pattern of CH4 emissions than can be explained by the nutritional value of the feed. Lastly, the previous findings were incorporated into a deterministic model to simulate alternative management options applied to upland beef farming systems. The success of the suggested management technologies in mitigating GHG emissions depends on the characteristics of the farms and the management previously adopted. Systems with a high proportion of land unsuitable for cropping, but with an efficient use of land, had low and more certain GHG emissions, high human-edible returns, and small opportunities to further reduce their carbon footprint per unit of product without affecting food production, potential biodiversity conservation and the livelihood of the region. Altogether, this work helps to reduce the uncertainty of GHG predictions from beef farming systems and highlights the essential role of holistic studies of climate change issues that encompass a large range of situations and management alternatives.
