251

Breaking Uncertainties for Product Offerings: A Holistic Framework of Uncertainty Management for Planning, Designing and Developing PSS (Product/Service System)

Ashok Kumar, Allan, Chau Trinh, Giang January 2011 (has links)
Over the last decade, PSS (Product/Service System) has emerged as an effective business model that helps manufacturers significantly increase productivity and customer satisfaction while minimizing environmental impact. PSS has driven an innovative transaction trend in which industrial companies focus on integrated service offerings and the fulfilment of customer needs rather than simply providing physical products. However, to implement PSS successfully, manufacturers have to overcome many challenges and uncertainties. Uncertainties in the PSS planning phase relate to market, environment and company analysis, while reliability, product/service integration and supplier coordination are potential uncertainties in the design and development stages. Uncertainty is defined as a "state of deficiency of information related to a future event" (Sakao et al., 2009). Risks derived from the negative side of uncertainty may reduce the efficiency of the model or even cause the implementation to fail; if an uncertainty is resolved favourably, it can instead become a business opportunity for PSS companies. While many companies already have their own uncertainty management initiatives, others rely solely on long experience to treat uncertainties, and numerous companies are therefore seeking a comprehensive uncertainty management framework applicable in most circumstances. To address this need, this thesis develops a holistic framework for managing risks that occur in the PSS planning, design and development stages. Building on previous PSS research and the empirical data collected, the dissertation first identifies critical uncertainty factors and the business opportunities that can be exploited from them. It then investigates PSS product quality thresholds and producers' perceptions of the reliability of their products before constructing a general uncertainty management framework. The whole management process, based on an Active Risk Management philosophy and comprising Risk Management Planning, Risk Identification, Risk Assessment and Prioritization, Risk Quantification, Risk Response Planning, and Risk Tracking and Control, is introduced as a guideline to help PSS companies treat uncertainties effectively in PSS planning, design and development.
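As a rough illustration of the assessment-and-prioritization step of such an Active Risk Management cycle (not code from the thesis), the following Python sketch ranks the entries of a simple risk register by expected impact; the risk names and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str          # uncertainty factor identified in the planning/design phase
    probability: float # estimated likelihood of occurrence, 0..1
    impact: float      # estimated cost or severity if the risk materializes
    response: str = "" # planned mitigation or exploitation action

def prioritize(risks):
    """Rank risks by expected impact (probability x impact), highest first."""
    return sorted(risks, key=lambda r: r.probability * r.impact, reverse=True)

# Illustrative entries only -- the factor names and figures are hypothetical,
# not findings reported in the thesis.
register = [
    Risk("Supplier coordination delay", probability=0.4, impact=80_000),
    Risk("Product/service integration defect", probability=0.2, impact=150_000),
    Risk("Market demand overestimated", probability=0.3, impact=120_000),
]

for r in prioritize(register):
    print(f"{r.name}: expected impact {r.probability * r.impact:,.0f}")
```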
252

Incorporating sensor uncertainty in robot map building using fuzzy boundary representation

Tovar, Alejandro 17 April 2014 (has links)
A map is important for autonomous mobile robots to traverse an environment safely and efficiently through highly competent path planning, navigation and localization. Maps are generated from sensor data; however, sensor uncertainties affect the mapping process and thus influence the performance of path planning, navigation and localization. This thesis proposes to incorporate sensor uncertainty information into the robot's environmental map using Fuzzy Boundary Representation (B-rep). The fuzzy B-rep map is generated by first converting measured range data into scan polygons, then combining the scan polygons into the resultant B-rep map by a union operation, and finally fuzzifying the B-rep map by sweeping a sensor-uncertainty membership function along the generated boundary. A map of the fifth floor of the E1 building is generated with the proposed method to demonstrate the reduction in computational and memory load of fuzzy B-rep mapping compared with conventional grid-based mapping methods.
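A minimal sketch of the scan-polygon union and fuzzification steps described above, assuming the shapely geometry library; the Gaussian membership function and the sigma value are illustrative choices, not the parameters used in the thesis.

```python
import math
from shapely.geometry import Polygon, Point
from shapely.ops import unary_union

def scan_to_polygon(pose, ranges, angles):
    """Convert a range scan taken at `pose` (x, y) into a scan polygon."""
    x0, y0 = pose
    pts = [(x0 + r * math.cos(a), y0 + r * math.sin(a)) for r, a in zip(ranges, angles)]
    return Polygon([(x0, y0)] + pts)

def boundary_membership(brep, point, sigma=0.1):
    """Fuzzy membership of `point` in the map boundary: 1 on the B-rep
    boundary, decaying with distance according to sensor uncertainty sigma."""
    d = brep.boundary.distance(Point(point))
    return math.exp(-(d / sigma) ** 2)

# Two overlapping scan polygons combined by a union operation into one B-rep map.
angles = [i * math.pi / 180 for i in range(0, 181, 10)]
scan_a = scan_to_polygon((0.0, 0.0), [2.0] * len(angles), angles)
scan_b = scan_to_polygon((1.0, 0.0), [2.0] * len(angles), angles)
brep_map = unary_union([scan_a, scan_b])

print(boundary_membership(brep_map, (2.0, 0.02)))  # close to the boundary -> membership near 1
```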
253

Statistical approach toward designing expert system

Hu, Zhiji January 1988 (has links)
Inference under uncertainty plays a crucial role in expert systems and receives growing attention from artificial intelligence experts, statisticians, and psychologists. Finding satisfactory new ways to model inference under uncertainty will require combining the efforts of researchers from these different areas; deeper insight into this problem will not only have an enormous impact on the development of AI and expert systems, but will also carry classical fields such as statistics into a new stage. This research paper gives a precise synopsis of present work in the field and explores the mechanics of statistical inference in greater depth by combining the efforts of computer scientists, statisticians, and psychologists. An important part of the paper compares different paradigms, including the difference between statistical and logical views, and considers the special care required when combining the various methods. Examples and counterexamples are given to illustrate how well individual models describe human behavior. Finally, a new framework for dealing with uncertainty is proposed, and future trends in uncertainty management are projected. / Department of Mathematical Sciences
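As general background to the contrast between statistical and logical views of uncertain inference (not an example from the paper), the snippet below compares a Bayesian update of a hypothesis with a MYCIN-style certainty-factor combination for two pieces of evidence; the numbers are arbitrary.

```python
def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    num = likelihood_given_h * prior
    return num / (num + likelihood_given_not_h * (1.0 - prior))

def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

# Statistical view: two conditionally independent pieces of evidence.
p = bayes_update(0.10, 0.8, 0.2)   # first evidence
p = bayes_update(p, 0.7, 0.3)      # second evidence
print(round(p, 3))

# Logical/rule-based view: the same two rules expressed as certainty factors.
print(combine_cf(0.6, 0.5))
```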
254

Affine Arithmetic Based Methods for Power Systems Analysis Considering Intermittent Sources of Power

Munoz Guerrero, Juan Carlos January 2013 (has links)
Intermittent power sources such as wind and solar are increasingly penetrating electrical grids, mainly motivated by global warming concerns and government policies. These intermittent and non-dispatchable sources of power affect the operation and control of the power system because of the uncertainties associated with their output power. Depending on the penetration level of intermittent sources of power, the electric grid may experience considerable changes in power flows and synchronizing torques associated with system stability, because of the variability of the power injections, among several other factors. Thus, adequate and efficient techniques are required to properly analyze the system stability under such uncertainties. A variety of methods are available in the literature to perform power flow, transient, and voltage stability analyses considering uncertainties associated with electrical parameters. Some of these methods are computationally inefficient and require assumptions regarding the probability density functions (pdfs) of the uncertain variables that may be unrealistic in some cases. Thus, this thesis proposes computationally efficient Affine Arithmetic (AA)-based approaches for voltage and transient stability assessment of power systems, considering uncertainties associated with power injections due to intermittent sources of power. In the proposed AA-based methods, the estimation of the output power of the intermittent sources and their associated uncertainty are modeled as intervals, without any need for assumptions regarding pdfs. This is a more desirable characteristic when dealing with intermittent sources of power, since the pdfs of the output power depend on the planning horizon and prediction method, among several other factors. The proposed AA-based approaches take into account the correlations among variables, thus avoiding error explosions attributed to other self-validated techniques such as Interval Arithmetic (IA).
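A minimal affine-arithmetic sketch, included only to illustrate the self-validated representation the thesis builds on: each uncertain quantity is a center value plus noise-symbol terms, and shared symbols preserve correlation, so x - x collapses to zero where plain interval arithmetic would yield a widened interval.

```python
class Affine:
    """Affine form: x0 + sum_i xi * eps_i, with each eps_i in [-1, 1]."""
    _next_symbol = 0

    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})

    @classmethod
    def from_interval(cls, lo, hi):
        sym = cls._next_symbol
        cls._next_symbol += 1
        return cls((lo + hi) / 2.0, {sym: (hi - lo) / 2.0})

    def __add__(self, other):
        terms = dict(self.terms)
        for s, c in other.terms.items():
            terms[s] = terms.get(s, 0.0) + c
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        neg = Affine(-other.center, {s: -c for s, c in other.terms.items()})
        return self + neg

    def interval(self):
        rad = sum(abs(c) for c in self.terms.values())
        return (self.center - rad, self.center + rad)

# Hypothetical wind injection known only to lie in [80, 120] MW.
p_wind = Affine.from_interval(80.0, 120.0)
print((p_wind - p_wind).interval())   # (0.0, 0.0): correlation preserved
# Plain interval arithmetic would give [80, 120] - [80, 120] = [-40, 40].
```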
255

The Utility of Using Multiple Conceptual Models for the Design of Groundwater Remediation Systems

Sheffield, Philip January 2014 (has links)
The design of pump and treat systems for groundwater remediation is often aided by numerical groundwater modelling. Model predictions are uncertain, with this uncertainty resulting from unknown parameter values, model structure and future system forcings. Researchers have begun to suggest that uncertainty in groundwater model predictions is largely dominated by structural/conceptual model uncertainty and that multiple conceptual models should be developed in order to characterize this uncertainty. As regulatory bodies begin to endorse the more expensive multiple conceptual model approach, it is useful to assess whether a multiple model approach provides a significant improvement over a conventional single model approach, supplemented with a factor of safety, for pump and treat system design. To investigate this question, a case study located in Tacoma, Washington, provided by Conestoga-Rovers & Associates (CRA), was used. Twelve conceptual models were developed to represent conceptual model uncertainty at the Tacoma site, and a pump and treat system was optimally designed for each conceptual model. Each design was tested across all 12 conceptual models with no factor of safety applied, and then with factors of safety of 1.5 and 2 applied. Adding a factor of safety of 1.5 decreased the risk of containment failure to 15 percent, compared to 21 percent with no factor of safety. Increasing the factor of safety from 1.5 to 2 further reduced the risk of containment failure to 9 percent, indicating that the application of a factor of safety reduces the risk of design failure at a cost directly proportional to the value of the factor of safety. To provide a relatively independent assessment of the factor-of-safety approach, a single "best" model developed by CRA was compared against the multiple model approach. With a factor of safety of 1.5 or greater, adequate capture was demonstrated across all 12 conceptual models. This showed that, in this case, using the single "best" model developed by CRA with a factor of safety would have been a reasonable surrogate for a multiple model approach. This is of practical importance to engineers as it demonstrates that a conventional single model approach may be sufficient; however, it is essential that the model used is a good model. Furthermore, a multiple model approach will likely be an excessive burden in cases such as pump and treat system design, where the cost of failure is low because the system can be adjusted during operation to respond to new data. This may not be the case for remedial systems with high capital costs, such as permeable reactive barriers, which cannot be easily adjusted.
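The testing procedure described above can be sketched as a small cross-evaluation: each design, sized for one conceptual model and scaled by a factor of safety, is checked against every model. The pumping-rate numbers below are randomly generated placeholders, not the Tacoma study's values, so the printed risks will not match the reported 21/15/9 percent figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical required pumping rate for capture under each of 12 conceptual
# models, and the rate each design (optimized for one model) would apply.
required = rng.uniform(50.0, 100.0, size=12)   # model m's true requirement
designed = required.copy()                     # design d sized exactly for model d

def failure_risk(factor_of_safety):
    applied = designed[:, None] * factor_of_safety   # design d applied to model m
    fails = applied < required[None, :]              # capture fails if under-sized
    return fails.mean()                              # fraction of failing (d, m) pairs

for fs in (1.0, 1.5, 2.0):
    print(f"factor of safety {fs}: failure risk {failure_risk(fs):.2f}")
```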
256

Climate change impact assessment and uncertainty analysis of the hydrology of a northern, data-sparse catchment using multiple hydrological models

Bohrn, Steven 17 December 2012 (has links)
The objective of this research was to determine the impact of climate change on the Churchill River basin and to analyse the uncertainty related to this impact. Three hydrological models were used and were calibrated to approximately equivalent levels of efficiency: WATFLOOD™, a semi-physically based, distributed model; HBV-EC, a semi-distributed, conceptual model; and HMETS, a lumped, conceptual model. The models achieved Nash-Sutcliffe calibration values ranging from 0.51 to 0.71. Climate change simulations indicated that, on average, flow increases slightly in the 2050s and decreases slightly in the 2080s. Each hydrological model predicted earlier freshets and a shift in the timing of low-flow events. Uncertainty analysis indicated that the chief contributor to uncertainty was the selection of GCM, followed by the choice of hydrological model, with parameterization of the hydrological model and selection of emissions scenario being less significant sources.
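For reference, the Nash-Sutcliffe efficiency used to report calibration quality is straightforward to compute; the flow series below are illustrative values, not Churchill River data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of
    the observations, and negative values are worse than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative flows only.
obs = [120.0, 150.0, 300.0, 280.0, 180.0]
sim = [110.0, 160.0, 270.0, 290.0, 170.0]
print(round(nash_sutcliffe(obs, sim), 2))
```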
257

The development and application of a normative framework for considering uncertainty and variability in economic evaluation

Coyle, Douglas January 2004 (has links)
The focus of this thesis is the development and application of a normative framework for handling both variability and uncertainty when making decisions using economic evaluation. The framework builds on recent work that takes an intuitive Bayesian approach to handling uncertainty, and adds a similar approach for handling variability. The technique of stratified cost-effectiveness analysis is introduced as an innovative, intuitive and theoretically sound basis for considering variability with respect to cost effectiveness; it requires the identification of patient strata that differ from one another but are internally relatively homogeneous. For handling uncertainty, the normative framework requires a twofold approach. First, the cost effectiveness of therapies within each patient stratum must be assessed using probabilistic analysis. Second, techniques for estimating the expected value of perfect information (EVPI) should be applied to determine an efficient research plan for the disease of interest. For the latter, a new technique for estimating EVPI based on quadrature is described which is both accurate and allows simpler calculation of the expected value of sample information. In addition, the unit normal loss integral method, previously overlooked as a method of estimating EVPPI, is shown to be appropriate in specific circumstances. The normative framework is applied to decisions on the public funding of osteoporosis treatment in the province of Ontario. The optimal limited-use criteria would be to fund treatment with alendronate for women aged 75 years and over with a previous fracture and 77 years and over with no previous fracture. An efficient research plan would fund a randomised controlled trial comparing etidronate to no therapy with a sample size of 640; certain other research studies are of lesser value. Subsequent to the analysis contained in this thesis, the province of Ontario revised its limited-use criteria to be broadly in line with the conclusions of this analysis, demonstrating both the feasibility and the acceptability of the framework. The normative framework developed in this thesis provides an optimal solution for decision makers in handling uncertainty and variability in economic evaluation. Further research refining methods for estimating the value of information and considering other forms of uncertainty within models will enhance the framework.
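As background to the value-of-information step (and not the quadrature or unit-normal-loss-integral methods developed in the thesis), EVPI can be sketched by simple Monte Carlo over net monetary benefit: it is the expected gain from choosing the best strategy per parameter realization rather than once on expectation. The distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical net monetary benefit of two strategies under parameter uncertainty.
nb_standard = rng.normal(10_000, 1_500, n)
nb_new      = rng.normal(10_500, 3_000, n)
nb = np.column_stack([nb_standard, nb_new])

expected_with_current_info = nb.mean(axis=0).max()   # pick the best strategy now
expected_with_perfect_info = nb.max(axis=1).mean()   # pick the best per realization
evpi = expected_with_perfect_info - expected_with_current_info
print(round(evpi, 1))   # per-decision EVPI in monetary units
```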
258

Static critical properties of the pure and diluted Heisenberg or Ising models

Davies, Mathew Raymond January 1982 (has links)
Real space renormalisation group scaling techniques are used to investigate the static critical behaviour of the pure and dilute, classical, anisotropic Heisenberg model. Transfer matrix methods are employed to obtain asymptotically exact expressions for the correlation lengths and susceptibilities of the one-dimensional system. The resulting scaling relationships are combined with an approximate bond moving scheme to treat pure and dilute models in higher dimensionalities. Detailed discussions are given of the dependence of correlation lengths and susceptibilities on temperature, anisotropy and concentration, and of the dependence of the critical temperature on anisotropy and concentration. Particular emphasis is given to the weakly anisotropic system near the percolation threshold, and comparisons are made between the results of the present analysis and those of neutron-scattering experiments on dilute quasi-two- and three-dimensional systems.
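For the one-dimensional case, the transfer-matrix route to an exact correlation length can be illustrated with the classical Ising limit of the model (a standard textbook calculation, shown here only to indicate the method): the correlation length follows from the ratio of the two transfer-matrix eigenvalues.

```python
import numpy as np

def ising_1d_correlation_length(K):
    """Correlation length of the zero-field 1D Ising chain from the ratio of
    the two transfer-matrix eigenvalues, with K = J / (k_B T)."""
    T = np.array([[np.exp(K), np.exp(-K)],
                  [np.exp(-K), np.exp(K)]])
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # lam[0] > lam[1] > 0
    return 1.0 / np.log(lam[0] / lam[1])         # xi = 1 / ln(lam1 / lam2)

for K in (0.5, 1.0, 2.0):
    print(K, round(ising_1d_correlation_length(K), 3))
```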
259

Management of Uncertainties in Publish/Subscribe System

Liu, Haifeng 18 February 2010 (has links)
In the publish/subscribe paradigm, information providers disseminate publications to all consumers who have expressed interest by registering subscriptions. This paradigm has found widespread application, ranging from selective information dissemination to network management. However, existing publish/subscribe systems cannot capture the uncertainty inherent in the information carried by either subscriptions or publications. In many situations, a large number of data sources exhibit various kinds of uncertainty: the exact knowledge needed to specify subscriptions or publications may not be available, the match between a subscription and a publication with uncertain data may be approximate, and the constraints that define a match may not be purely content based but may also take semantic information into consideration. These kinds of uncertainty have received little attention in the context of publish/subscribe systems. In this thesis, we propose new publish/subscribe models to express uncertainty and semantics in publications and subscriptions, along with the matching semantics for each model. We also develop efficient filtering algorithms for our models so that they can be applied to the rapidly growing volume of information on the Internet. A thorough experimental evaluation demonstrates that the proposed systems scale to large numbers of subscribers and high publishing rates.
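One simple way to picture approximate matching with uncertain data (a hypothetical sketch, not the models proposed in the thesis) is to attach a confidence to each publication attribute and deliver an event only when all subscription predicates hold with combined confidence above a threshold:

```python
def matches(publication, subscription, threshold=0.8):
    """Approximate match of an uncertain publication against a subscription:
    each attribute carries a (value, confidence) pair with confidence in [0, 1];
    the event is delivered only if every predicate holds and the combined
    confidence exceeds the threshold (independence assumed for simplicity)."""
    confidence = 1.0
    for attr, predicate in subscription.items():
        if attr not in publication:
            return False
        value, conf = publication[attr]
        if not predicate(value):
            return False
        confidence *= conf
    return confidence >= threshold

# Hypothetical stock-quote event with per-attribute confidence.
pub = {"symbol": ("ACME", 1.0), "price": (101.3, 0.9)}
sub = {"symbol": lambda s: s == "ACME", "price": lambda p: p > 100}
print(matches(pub, sub))   # True: combined confidence 0.9 exceeds the 0.8 threshold
```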
260

Distributed Estimation in Sensor Networks with Modeling Uncertainty

Zhou, Qing 03 October 2013 (has links)
A major issue in distributed wireless sensor networks (WSNs) is the design of efficient distributed algorithms for network-wide dissemination of information acquired by individual sensors, where each sensor, by itself, is unable to access enough data for reliable decision making. Without a centralized fusion center, network-wide reliable inferencing can be accomplished by recovering meaningful global statistics at each sensor through iterative inter-sensor message passing. In this dissertation, we first consider the problem of distributed estimation of an unknown deterministic scalar parameter (the target signal) in a WSN, where each sensor receives a single snapshot of the field. An iterative distributed least-squares (DLS) algorithm is investigated with and without the consideration of node failures. In particular, without sensor node failures it is shown that every instantiation of the DLS algorithm converges, i.e., consensus is reached among the sensors, with the limiting agreement value being the centralized least-squares estimate. With node failures during the iterative exchange process, the convergence of the DLS algorithm is still guaranteed; however, an error exists between the limiting agreement value and the centralized least-squares estimate. In order to reduce this error, a modified DLS scheme, the M-DLS, is provided. The M-DLS algorithm involves an additional weight compensation step, in which a sensor performs a one-time weight compensation procedure whenever it detects the failure of a neighbor. Through analytical arguments and simulations, it is shown that the M-DLS algorithm leads to a smaller error than the DLS algorithm, where the magnitude of the improvement depends on the network topology. We then investigate the case where the observation or sensing mode is only partially known at the corresponding nodes, perhaps due to their limited sensing capabilities or other unpredictable physical factors. Specifically, it is assumed that the observation validity at a node switches stochastically between two modes, with mode I corresponding to the desired signal-plus-noise observation mode (a valid observation) and mode II corresponding to pure noise with no signal information (an invalid observation). With no prior information on the local sensing modes (valid or invalid), we introduce a learning-based distributed estimation procedure, the mixed detection-estimation (MDE) algorithm, based on closed-loop interactions between iterative distributed mode learning and target estimation. The online learning (or sensing-mode detection) step re-assesses the validity of the local observations at each iteration, thus refining the ongoing estimation update process. The convergence of the MDE algorithm is established analytically, and the asymptotic performance analysis shows that, in the high signal-to-noise ratio (SNR) regime, the MDE estimation error converges to that of an ideal (centralized) estimator with perfect information about the node sensing modes. This is in contrast with the estimation performance of a naive average-consensus-based distributed estimator (with no mode learning), whose estimation error blows up with increasing SNR.
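A minimal sketch of consensus-style distributed least squares without node failures, assuming a fixed ring topology and uniform weights (not the exact DLS weights or the M-DLS compensation step analyzed in the dissertation): each sensor repeatedly averages with its neighbours and converges to the network-wide mean, which is the centralized least-squares estimate of a scalar parameter observed in noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ring network of 6 sensors, each with one noisy snapshot of theta.
theta = 5.0
n = 6
y = theta + rng.normal(0.0, 1.0, n)
x = y.copy()                         # each sensor's local estimate

# Uniform averaging with the two ring neighbours at every iteration.
eps = 1.0 / 3.0
for _ in range(200):                 # iterative inter-sensor message passing
    left = np.roll(x, 1)
    right = np.roll(x, -1)
    x = x + eps * ((left - x) + (right - x))

print(np.allclose(x, y.mean()))      # consensus on the centralized LS estimate
```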
