21

Analysing uncertainty and delays in aircraft heavy maintenance

Salazar Rosales, Leandro Julian January 2016
This study investigates the influence of unscheduled maintenance activities on delays and disruptions during the execution of aircraft heavy maintenance services by developing a simulation model based on System Dynamics (SD) and supported by an Evidential Reasoning (ER) rule model. The SD model studies the complex interrelationship between scheduled and unscheduled tasks and their impact on delays during the execution of a maintenance service. It was found that the uncertain nature of the unscheduled maintenance tasks hinders the planning, control and allocation of resources, increasing the chances of missing deadlines and incurring cost overruns. Utilising causal loop diagrams and SD simulation, the research explored the influence that resource allocation management, precise estimation of the unscheduled tasks and their prompt identification have on the maintenance check duration. The influence that delays and attitudes in the decision-making process have on project performance was also investigated. The ER rule model investigates the uncertainty present during the execution of a maintenance check by providing a belief distribution of the expected unscheduled maintenance tasks. Through a non-parametric discretisation process, it was found that the size and array of distribution intervals play a key role in the model's estimation accuracy. Additionally, a sensitivity analysis allowed the examination of the significance that the weight, reliability and dependence of the different pieces of evidence have on model performance. By analysing and combining historical data, the ER rule model provides a more realistic and accurate prediction for analysing variability and ambiguity. This research extends SD capabilities by incorporating the ER rule for analysing system uncertainty. By using the belief distributions provided by the ER model, the SD model can simulate the variability of the process given certain pieces of evidence. This study contributes to the existing knowledge in aircraft maintenance management by analysing, from a different perspective, the impact of uncertain unscheduled maintenance activities on delays and disruptions through an integrated approach using SD and the ER rule. Although this research focuses on a particular problem in the airline industry, the findings and conclusions obtained could be used to understand and address problems embodying similar characteristics. Therefore, it can be argued that, due to the close similarities between the heavy maintenance process and complex projects, these contributions can be extended to the Project Management field.
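As a rough illustration of how a belief distribution over unscheduled workload, of the kind the ER rule model above is described as producing, could feed a simulation of maintenance-check overruns, the following minimal Python sketch samples from a discretised belief distribution and estimates the chance of exceeding planned capacity. The interval bounds, belief masses and capacity figure are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch: using a belief distribution over unscheduled man-hours
# (as an ER-style model might output) to drive a simple Monte Carlo estimate
# of schedule overrun. Interval bounds, belief masses and capacity are
# illustrative assumptions, not values from the thesis.
import random

# Belief distribution over discretised intervals of unscheduled man-hours
intervals = [(0, 200), (200, 400), (400, 800), (800, 1200)]
beliefs   = [0.15, 0.45, 0.30, 0.10]          # must sum to 1

PLANNED_CAPACITY = 500                        # man-hours reserved for unscheduled work

def sample_unscheduled_hours():
    """Draw one realisation: pick an interval by belief mass, then a value in it."""
    lo, hi = random.choices(intervals, weights=beliefs, k=1)[0]
    return random.uniform(lo, hi)

def prob_overrun(n_runs=10_000):
    """Estimate the probability that unscheduled work exceeds planned capacity."""
    overruns = sum(sample_unscheduled_hours() > PLANNED_CAPACITY for _ in range(n_runs))
    return overruns / n_runs

if __name__ == "__main__":
    print(f"Estimated probability of overrun: {prob_overrun():.2%}")
```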
22

Evidence majetku obcí / Record keeping of the municipal property

Schmiederová, Kristina January 2010
The thesis deals with the issues of record keeping of municipal property with regard to its development. The main goal is to identify how the information resources for property passportization can be obtained, from both a time and a financial point of view. Property passportization is defined as multifunctional record keeping, and the thesis contains a proposal based on practical experience (for real estate, on the basis of the real situation in municipalities). The draft of the record card is created in Microsoft Excel.
23

An online belief rule-based group clinical decision support system

Kong, Guilan January 2011
Around ten percent of patients admitted to National Health Service (NHS) hospitals have experienced a patient safety incident, and an important reason for the high rate of patient safety incidents is medical error. Research shows that an appropriate increase in the use of clinical decision support systems (CDSSs) could help to reduce medical errors and result in substantial improvement in patient safety. However, several barriers continue to impede the effective implementation of CDSSs in clinical settings, among which the representation of and reasoning about medical knowledge, particularly under uncertainty, are areas that require refined methodologies and techniques. In particular, the knowledge base in a CDSS needs to be updated automatically based on accumulated clinical cases to provide evidence-based clinical decision support. In this research, we employed the recently developed belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) for the design and development of an online belief rule-based group CDSS prototype. In the system, a belief rule base (BRB) was used to model uncertain clinical domain knowledge, the evidential reasoning (ER) approach was employed to build the inference engine, a BRB training module was developed for learning the BRB through accumulated clinical cases, and an online discussion forum together with an ER-based group preference aggregation tool was developed to provide online clinical group decision support. We used a set of simulated patients with cardiac chest pain, provided by our research collaborators at Manchester Royal Infirmary, to validate the developed online belief rule-based CDSS prototype. The results show that the prototype can provide reliable diagnosis recommendations and that the diagnostic performance of the system can be improved significantly after training the BRB with accumulated clinical cases.
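The following is a minimal, hypothetical sketch of how a belief rule base of the kind described above can map a patient's matching degrees against rules into a belief distribution over diagnostic grades. It uses a simple activation-weighted average rather than the full recursive ER algorithm used by RIMER, and the rules, weights and matching degrees are illustrative only.

```python
# Minimal sketch of a belief-rule-base (BRB) lookup, assuming hypothetical
# rules and matching degrees; a full RIMER system would use the recursive
# evidential reasoning (ER) algorithm rather than the simple weighted
# aggregation shown here.
from dataclasses import dataclass

GRADES = ["cardiac", "non-cardiac"]

@dataclass
class BeliefRule:
    rule_weight: float
    consequent: dict  # belief degrees over diagnostic grades

rules = [
    BeliefRule(1.0, {"cardiac": 0.9, "non-cardiac": 0.1}),
    BeliefRule(0.8, {"cardiac": 0.3, "non-cardiac": 0.7}),
]

# hypothetical matching degrees of the current patient to each rule's antecedent
matching = [0.7, 0.3]

def aggregate(rules, matching):
    """Combine rule consequents, weighting each rule by its activation weight."""
    activation = [r.rule_weight * m for r, m in zip(rules, matching)]
    total = sum(activation)
    weights = [a / total for a in activation]
    return {g: sum(w * r.consequent[g] for w, r in zip(weights, rules)) for g in GRADES}

print(aggregate(rules, matching))
```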
24

Decision Support System (DSS) for construction project risk analysis and evaluation via evidential reasoning (ER)

Taroun, Abdulmaten January 2012
This research explores the theory and practice of risk assessment and project evaluation and proposes novel alternatives. Reviewing the literature revealed a continuous endeavour for better project risk modelling and analysis. A number of proposals for improving the prevailing Probability-Impact (P-I) risk model can be found in the literature. Moreover, researchers have investigated the feasibility of different theories in analysing project risk. Furthermore, various decision support systems (DSSs) are available for aiding practitioners in risk assessment and decision making. Unfortunately, they suffer from a low take-up. Instead, personal judgment and past experience are mainly used for analysing risk and making decisions. In this research, a new risk model is proposed by extending the P-I risk model to include a third dimension: probability of impact materialisation. Such an extension reflects the characteristics of a risk, its surrounding environment and the ability to mitigate its impact. A new assessment methodology is devised. Dempster-Shafer Theory of Evidence (DST) is researched and presented as a novel alternative to Probability Theory (PT) and Fuzzy Sets Theory (FST), which dominate the literature of project risk analysis. A DST-based assessment methodology was developed for structuring the personal experience and professional judgment of risk analysts and utilising them for risk analysis. Benefiting from the unique features of the Evidential Reasoning (ER) approach, the proposed methodology enables analysts to express their evaluations in distributed forms, so that they can provide degrees of belief in a predefined set of assessment grades based on available information. This is a very effective way of tackling the problem of lack of information, which is an inherent feature of most projects during the tendering stage. It is the first time that such an approach has been used for handling construction risk assessment. Monetary equivalent is used as a common scale for measuring risk impact on various project success objectives, and the evidential reasoning (ER) algorithm is used as an assessment aggregation tool instead of the simple averaging procedure, which might not be appropriate in all situations. A DST-based project evaluation framework was developed using project risks and benefits as evaluation attributes. Monetary equivalent was also used as a common scale for measuring project risks and benefits, and the ER algorithm as an aggregation tool. The viability of the proposed risk model, assessment methodology and project evaluation framework was investigated by conducting interviews with construction professionals and administering postal and online questionnaires. A decision support system (DSS) was devised to facilitate the proposed approaches and to perform the required calculations. The DSS was developed in light of the research findings regarding the reasons for the low take-up of existing tools. Four validation case studies were conducted. Senior managers in separate British construction companies tested the tool and found it useful, helpful and easy to use. It is concluded that the proposed risk model, risk assessment methodology and project evaluation framework could be viable alternatives to the existing ones. Professional experience was modelled and utilised systematically for risk and benefit analysis. This may help to close the gap between theory and practice of risk analysis and decision making in construction. The research findings recommend further exploration of the potential applications of DST and ER in the construction management domain.
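As a hedged illustration of the extended Probability-Impact model described above, the sketch below scores each risk by probability of occurrence, probability of impact materialisation and a monetary-equivalent impact, then sums expected exposures. The figures are hypothetical, and the thesis aggregates assessments with the ER algorithm rather than the simple expected-value sum used here.

```python
# Illustrative sketch of the extended Probability-Impact risk model described
# above: each risk carries a probability of occurrence, a probability that its
# impact actually materialises, and an impact expressed as a monetary
# equivalent. Figures are hypothetical; the thesis aggregates assessments with
# the ER algorithm rather than the simple expected-value sum used here.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    p_occurrence: float      # probability the risk event occurs
    p_materialise: float     # probability its impact materialises if it occurs
    impact_monetary: float   # impact as a monetary equivalent (e.g. GBP)

    def exposure(self) -> float:
        return self.p_occurrence * self.p_materialise * self.impact_monetary

risks = [
    Risk("ground conditions worse than surveyed", 0.30, 0.80, 250_000),
    Risk("late design changes by client",         0.50, 0.40, 120_000),
    Risk("key subcontractor insolvency",          0.10, 0.90, 400_000),
]

for r in risks:
    print(f"{r.name}: expected exposure = {r.exposure():,.0f}")

print(f"Total expected exposure = {sum(r.exposure() for r in risks):,.0f}")
```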
25

Necessary Evil or Unnecessary God?

January 2018
In this thesis, I discuss the philosophical problem of evil and, as a response, John Hick's soul-making theodicy. First, I discuss the transformation of the problem, examining how it has shifted from logical to evidential in recent history. Next, I offer a faithful rendition of Hick's position, one which states that the existence of evil does not provide evidence against the existence of God. After reconstructing his argument, I go on to expose its logical faults. I present four main contentions against Hick's theodicy. First, I analyze the psychology of dehumanization to question whether we have any evidence that soul making is happening in response to the suffering in the world. Second, I argue that Hick's theodicy is self-defeating if accepted, because it undermines the central point on which his argument depends. Third, I claim that Hick's theodicy is self-defeating given his eschatological views. Finally, I discuss how Hick's theodicy does not account for the animal suffering that widely exists in the world now and that exists in our evolutionary history. My hope is to show that Hick's theodicy fails to solve the problem of evil. I claim that the amount of gratuitous suffering in the world does provide evidence against the existence of God. / Dissertation/Thesis / Masters Thesis Philosophy 2018
26

Hominis Presumptions and Evidential Inferences / Las presunciones hominis y las inferencias probatorias

Aguiló Regla, Josep 10 April 2018
The author challenges the terminology «legal presumptions» and «judicial presumptions», and refers instead to presumptions established by rules of presumption and to hominis presumptions. He argues that the best way to differentiate between them is by showing the contrast between «it shall be presumed» (a syntagm proper to practical reasoning) and «it is presumable» (a syntagm proper to theoretical reasoning). The text clarifies the relationship between the so-called hominis presumptions and factual or evidential inferences in general, answering the question of what the «it is presumable» syntagm (proper to the hominis presumptions) adds with respect to the «it is probable» syntagm (proper to all evidential inferences).
27

Semantic Decision Support for Information Fusion Applications / Aide à la décision sémantique pour la diffusion d'informations

Bellenger, Amandine 03 June 2013
This thesis belongs to the domain of knowledge representation and the modelling of uncertainty in a context of information fusion. The main idea is to use semantic tools, and more specifically ontologies, not only to represent general domain knowledge and observations, but also to represent the uncertainty that sources may introduce in their own observations. We propose to represent these uncertainties and semantic imprecision through a meta-ontology (called DS-Ontology) based on the theory of belief functions. The contribution of this work focuses first on the definition of semantic inclusion and intersection operators for ontologies, on which the implementation of the theory of belief functions relies, and secondly on the development of a tool called FusionLab for merging semantic information within ontologies, building on the preceding theoretical development. This work has been applied within a European maritime surveillance project.
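For readers unfamiliar with belief functions, the sketch below shows Dempster's rule of combination, the basic operation that belief-function-based fusion of the kind described above builds on. The frame of discernment (ship types) and the two source mass functions are hypothetical and do not come from the DS-Ontology or FusionLab.

```python
# Minimal sketch of Dempster's rule of combination, the core operation in
# belief-function-based fusion of the kind described above. The frame of
# discernment and the two source mass functions are hypothetical.
from itertools import product

FRAME = frozenset({"cargo", "fishing", "tanker"})

# Mass functions from two sources (keys are subsets of the frame).
m1 = {frozenset({"cargo"}): 0.6, frozenset({"cargo", "tanker"}): 0.3, FRAME: 0.1}
m2 = {frozenset({"tanker"}): 0.5, frozenset({"cargo", "tanker"}): 0.4, FRAME: 0.1}

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalise by conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

for subset, mass in combine(m1, m2).items():
    print(sorted(subset), round(mass, 3))
```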
28

Toward Error-Statistical Principles of Evidence in Statistical Inference

Jinn, Nicole Mee-Hyaang 02 June 2014
The context for this research is statistical inference, the process of making predictions or inferences about a population from observations and analyses of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to be upheld or justified by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, the two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities, with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., "even if the discrepancy exists") with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence. / Master of Arts
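To make the severity idea concrete, here is a small sketch (assuming a one-sided Normal test with known standard deviation) of Mayo-style post-data severity for claims of the form "mu > mu1" after a statistically significant result. All numbers are made up; the point is only that severity depends on the observed outcome and the discrepancy of interest.

```python
# A hedged illustration of Deborah Mayo's severity assessment mentioned above,
# for a one-sided Normal test of H0: mu <= mu0 with known sigma. Numbers are
# made up; the point is only to show how post-data severity depends on the
# observed outcome and the discrepancy of interest.
from math import sqrt
from scipy.stats import norm

mu0, sigma, n = 0.0, 2.0, 100       # null value, known sd, sample size
xbar_obs = 0.5                      # observed sample mean (rejects H0 at ~1% level)
se = sigma / sqrt(n)

def severity_mu_greater_than(mu1: float) -> float:
    """Severity for the claim 'mu > mu1' after observing xbar_obs:
    the probability of a result less extreme than xbar_obs if mu were mu1."""
    return norm.cdf((xbar_obs - mu1) / se)

for mu1 in (0.0, 0.2, 0.4, 0.5):
    print(f"SEV(mu > {mu1}) = {severity_mu_greater_than(mu1):.3f}")
```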
29

Uncertainty Estimation on Natural Language Processing

He, Jianfeng 15 May 2024
Text plays a pivotal role in our daily lives, encompassing various forms such as social media posts, news articles, books, reports, and more. Consequently, Natural Language Processing (NLP) has garnered widespread attention. This technology empowers us to undertake tasks like text classification, entity recognition, and even crafting responses within a dialogue context. However, despite the expansive utility of NLP, it frequently necessitates a critical decision: whether to place trust in a model's predictions. To illustrate, consider a state-of-the-art (SOTA) model entrusted with diagnosing a disease or assessing the veracity of a rumor. An incorrect prediction in such scenarios can have dire consequences, impacting individuals' health or tarnishing their reputation. Consequently, it becomes imperative to establish a reliable method for evaluating the reliability of an NLP model's predictions, which is our focus: uncertainty estimation on NLP. Though many works have researched uncertainty estimation or NLP, the combination of these two domains is rare. This is because most NLP research emphasizes model prediction performance but tends to overlook the reliability of NLP model predictions. Additionally, current uncertainty estimation models may not be suitable for NLP due to the unique characteristics of NLP tasks, such as the need for more fine-grained information in named entity recognition. Therefore, this dissertation proposes novel uncertainty estimation methods for different NLP tasks by considering each NLP task's distinct characteristics. The NLP tasks are categorized into natural language understanding (NLU) and natural language generation (NLG, such as text summarization). Among the NLU tasks, understanding can be considered from two views: a global view (e.g. text classification at document level) and a local view (e.g. natural language inference at sentence level and named entity recognition at token level). As a result, we research uncertainty estimation on three tasks: text classification, named entity recognition, and text summarization. In addition, because few-shot text classification has captured much attention recently, we also research uncertainty estimation on few-shot text classification. For the first topic, uncertainty estimation on text classification, few uncertainty models focus on improving the performance of text classification where human resources are involved. In response to this gap, our research focuses on enhancing the accuracy of uncertainty scores by bolstering the confidence associated with winning scores. We introduce MSD, a novel model comprising three distinct components: 'mix-up,' 'self-ensembling,' and 'distinctiveness score.' The primary objective of MSD is to refine the accuracy of uncertainty scores by mitigating the issue of overconfidence in winning scores while simultaneously considering various categories of uncertainty. MSD can seamlessly integrate with different Deep Neural Networks. Extensive experiments with ablation settings are conducted on four real-world datasets, resulting in consistently competitive improvements. Our second topic focuses on uncertainty estimation on few-shot text classification (UEFTC), which has few or even only one available support sample for each class. UEFTC represents an underexplored research domain where, due to limited data samples, a UEFTC model predicts an uncertainty score to assess the likelihood of classification errors.
However, traditional uncertainty estimation models in text classification are ill-suited for UEFTC since they demand extensive training data, while UEFTC operates in a few-shot scenario, typically providing just a few support samples, or even just one, per class. To tackle this challenge, we introduce Contrastive Learning from Uncertainty Relations (CLUR) as a solution tailored for UEFTC. CLUR exhibits the unique capability to be effectively trained with only one support sample per class, aided by pseudo uncertainty scores. A distinguishing feature of CLUR is its autonomous learning of these pseudo uncertainty scores, in contrast to previous approaches that relied on manual specification. Our investigation of CLUR encompasses four model structures, allowing us to evaluate the performance of three commonly employed contrastive learning components in the context of UEFTC. Our findings highlight the effectiveness of two of these components. Our third topic focuses on uncertainty estimation on sequential labeling. Sequential labeling involves the task of assigning labels to individual tokens in a sequence, exemplified by Named Entity Recognition (NER). Despite significant advancements in enhancing NER performance in prior research, the realm of uncertainty estimation for NER (UE-NER) remains relatively uncharted but is of paramount importance. This topic focuses on UE-NER, seeking to gauge uncertainty scores for NER predictions. Previous models for uncertainty estimation often overlook two distinctive attributes of NER: the interrelation among entities (where the learning of one entity's embedding depends on others) and the challenges posed by incorrect span predictions in entity extraction. To address these issues, we introduce the Sequential Labeling Posterior Network (SLPN), designed to estimate uncertainty scores for the extracted entities while considering uncertainty propagation from other tokens. Additionally, we have devised an evaluation methodology tailored to the specific nuances of wrong-span cases. Our fourth topic focuses on an overlooked question that persists regarding the evaluation reliability of uncertainty estimation in text summarization (UE-TS). Text summarization, a key task in natural language generation (NLG), holds significant importance, particularly in domains where inaccuracies can have serious consequences, such as healthcare. UE-TS has garnered attention due to the potential risks associated with erroneous summaries. However, the reliability of evaluating UE-TS methods raises concerns, stemming from the interdependence between uncertainty model metrics and the wide array of NLG metrics. To address these concerns, we introduce a comprehensive UE-TS benchmark incorporating twenty-six NLG metrics across four dimensions. This benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model across two datasets. Additionally, it assesses the effectiveness of fourteen common uncertainty estimation methods. Our study underscores the necessity of utilizing diverse, uncorrelated NLG metrics and uncertainty estimation techniques for a robust evaluation of UE-TS methods. / Doctor of Philosophy / Text is integral to our daily activities, appearing in various forms such as social media posts, news articles, books, and reports. We rely on text for communication, information dissemination, and decision-making. 
Given its ubiquity, the ability to process and understand text through Natural Language Processing (NLP) has become increasingly important. NLP technology enables us to perform tasks like text classification, which involves categorizing text into predefined labels, and named entity recognition (NER), which identifies specific entities such as names, dates, and locations within text. Additionally, NLP facilitates generating coherent and contextually appropriate responses in conversational agents, enhancing human-computer interaction. However, the reliability of NLP models is crucial, especially in sensitive applications like medical diagnoses, where errors can have severe consequences. This dissertation focuses on uncertainty estimation in NLP, a less explored but essential area. Uncertainty estimation helps evaluate the confidence of NLP model predictions. We propose new methods tailored to various NLP tasks, acknowledging their unique needs. NLP tasks are divided into natural language understanding (NLU) and natural language generation (NLG). Within NLU, we look at tasks from two perspectives: a global view (e.g., document-level text classification) and a local view (e.g., sentence-level inference and token-level entity recognition). Our research spans text classification, named entity recognition (NER), and text summarization, with a special focus on few-shot text classification due to its recent prominence. For text classification, we introduce the MSD model, which includes three components to enhance uncertainty score accuracy and address overconfidence issues. This model integrates seamlessly with different neural networks and shows consistent improvements in experiments. For few-shot text classification, we develop Contrastive Learning from Uncertainty Relations (CLUR), designed to work effectively with minimal support samples per class. CLUR autonomously learns pseudo uncertainty scores, demonstrating effectiveness with various contrastive learning components. In NER, we address the unique challenges of entity interrelation and span prediction errors. We propose the Sequential Labeling Posterior Network (SLPN) to estimate uncertainty scores while considering uncertainty propagation from other tokens. For text summarization, we create a benchmark with tens of metrics to evaluate uncertainty estimation methods across two datasets. This benchmark helps assess the reliability of these methods, highlighting the need for diverse, uncorrelated metrics. Overall, our work advances the understanding and implementation of uncertainty estimation in NLP, providing more reliable and accurate predictions across different tasks.
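As context for the kind of uncertainty score discussed above, the sketch below computes a common baseline, the predictive entropy of ensembled (or MC-dropout) softmax outputs, for a toy text classifier. It is not the MSD, CLUR or SLPN model itself, and the softmax outputs are placeholders.

```python
# Not the MSD or SLPN models themselves: just a common baseline for the kind of
# uncertainty score discussed above, computed as the predictive entropy of
# ensembled (or MC-dropout) softmax outputs for a text classifier. The softmax
# outputs below are made-up placeholders.
import numpy as np

def predictive_entropy(softmax_runs: np.ndarray) -> float:
    """softmax_runs: (n_runs, n_classes) class probabilities from repeated
    stochastic forward passes; returns entropy of the averaged distribution."""
    mean_probs = softmax_runs.mean(axis=0)
    return float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())

# Confident prediction: the ensemble members agree on class 0.
confident = np.array([[0.95, 0.03, 0.02],
                      [0.92, 0.05, 0.03],
                      [0.97, 0.02, 0.01]])

# Uncertain prediction: the members disagree.
uncertain = np.array([[0.60, 0.30, 0.10],
                      [0.20, 0.50, 0.30],
                      [0.35, 0.30, 0.35]])

print("low-uncertainty example :", round(predictive_entropy(confident), 3))
print("high-uncertainty example:", round(predictive_entropy(uncertain), 3))
```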
30

Decision making study : methods and applications of evidential reasoning and judgment analysis

Shan, Yixing January 2015
Decision making study has been a multi-disciplinary research area involving operations researchers, management scientists, statisticians, mathematical psychologists and economists as well as others. This study aims to investigate the theory and methodology of decision making research and apply them to different contexts in real cases. The study has reviewed the literature of Multiple Criteria Decision Making (MCDM), the Evidential Reasoning (ER) approach, the Naturalistic Decision Making (NDM) movement, Social Judgment Theory (SJT), and the Adaptive Toolbox (AT) program. On the basis of these literatures, two methods, Evidence-based Trade-Off (EBTO) and Judgment Analysis with Heuristic Modelling (JA-HM), have been proposed and developed to address decision making problems under different conditions. In the EBTO method, we propose a novel framework to aid people's decision making under uncertainty and imprecise goals. Under the framework, the imprecise goal is objectively modelled through an analytical structure and is independent of the task requirement; the task requirement is specified by the trade-off strategy among criteria of the analytical structure through an importance weighting process, and is subject to the requirement change of a particular decision making task; the evidence available, which could contribute to the evaluation of the general performance of the decision alternatives, is formulated with belief structures capable of capturing the various formats of uncertainty that arise from the absence of data, incomplete information and subjective judgments. The EBTO method was further applied in a case study of Soldier system decision making. The application has demonstrated that EBTO, as a tool, is able to provide a holistic analysis regarding the requirements of Soldier missions, the physical conditions of Soldiers, and the capability of their equipment and weapon systems, which is critical in the domain. By drawing on the cross-disciplinary literature from NDM and AT, the JA-HM extended the traditional Judgment Analysis (JA) method, through a number of novel methodological procedures, to account for the unique features of decision making tasks under extreme time pressure and dynamically shifting situations. These novel methodological procedures include: the notion of a decision point to deconstruct the dynamically shifting situations in a way that a decision problem could be identified and formulated; the classification of routine and non-routine problems, and the associated data alignment process to enable meaningful decision data analysis across different decision makers (DMs); the notion of a composite cue to account for the DMs' iterative process of information perception and comprehension in a dynamic task environment; the application of computational models of heuristics to account for the time constraints and process dynamics of the DMs' decision making process; and the application of a cross-validation process to enable the methodological principle of competitive testing of decision models. The JA-HM was further applied in a case study of fire emergency decision making. The application has been the first behavioural test of the validity of the computational models of heuristics in predicting the DMs' decision making during fire emergency response. It has also been the first behavioural test of the validity of non-compensatory heuristics in predicting the DMs' decisions on a ranking task. The findings identified extend the literature of AT and NDM, and have implications for fire emergency decision making.
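As a small illustration of the "computational models of heuristics" mentioned above, the sketch below implements take-the-best, a standard non-compensatory heuristic from the Adaptive Toolbox literature: cues are checked in order of assumed validity and the first cue that discriminates decides. The cue names, ordering and values are hypothetical and are not drawn from the fire-emergency case study.

```python
# Hedged sketch of one "fast and frugal" non-compensatory heuristic from the
# Adaptive Toolbox literature referenced above (take-the-best): cues are
# checked in order of validity and the first discriminating cue decides, with
# no trade-off against remaining cues. Cue names, validities and values are
# hypothetical, not taken from the fire-emergency case study.
from typing import Optional

# cues ordered by (assumed) validity, each mapping option -> binary cue value
CUES = [
    ("visible smoke",            {"building_A": 1, "building_B": 0}),
    ("people reported inside",   {"building_A": 1, "building_B": 1}),
    ("proximity to fuel store",  {"building_A": 0, "building_B": 1}),
]

def take_the_best(option_x: str, option_y: str) -> Optional[str]:
    """Return the option favoured by the first cue that discriminates, or None."""
    for cue_name, values in CUES:
        if values[option_x] != values[option_y]:
            return option_x if values[option_x] > values[option_y] else option_y
    return None  # no cue discriminates: guess or defer

print(take_the_best("building_A", "building_B"))   # -> building_A (decided by smoke)
```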
