About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Designing a framework for simulating radiology information systems

Lindblad, Erik. January 2008.
In this thesis, a flexible framework for simulating RIS is designed for use in Infobroker testing. Infobroker is an application developed by Mawell Svenska AB that connects RIS and PACS to achieve interoperability by enabling image and journal data transmission between radiology sites. To put the project in context, the field of medical informatics, RIS and PACS systems, and common protocols and standards are explored. A proof-of-concept implementation of the proposed design shows its potential and verifies that it works. The thesis concludes that a more specialized approach is preferable.
42

Toward a novel predictive analysis framework for new-generation clinical decision support systems

Mazzocco, Thomas. January 2014.
The idea of developing automated tools able to deal with the complexity of clinical information processing dates back to the late 1960s. Since then, there has been scope for improving medical care due to the rapid growth of medical knowledge, and a need to explore new ways of delivering care given the shortage of physicians. Clinical decision support systems (CDSS) aid in the acquisition of patient data and suggest appropriate decisions on the basis of the data thus acquired. Many improvements are envisaged from the adoption of such systems, including: reduced costs through faster diagnosis, fewer unnecessary examinations, reduced risk of adverse events and medication errors, more time available for direct patient care, improved medication and examination prescriptions, improved patient satisfaction, and better compliance with gold-standard, up-to-date clinical pathways and guidelines.

Logistic regression is a widely used algorithm that frequently appears in the medical literature for building clinical decision support systems; however, published studies frequently have not followed commonly recommended procedures for using logistic regression, and substantial shortcomings in the reporting of logistic regression results have been noted. The published literature has often accepted conclusions from studies that did not address the appropriateness and accuracy of the statistical analyses and other methodological issues, leading to design flaws in those models and to possible inconsistencies in the novel clinical knowledge based on such results.

The main objective of this interdisciplinary work is to design a sound framework for the development of clinical decision support systems. We propose a framework that supports the proper development of such systems, and in particular the underlying predictive models, identifying best practices for each stage of the model's development. The framework is composed of a number of subsequent stages: 1) dataset preparation ensures that appropriate variables are presented to the model in a consistent format; 2) the model construction stage builds the actual regression (or logistic regression) model, determining its coefficients and selecting statistically significant variables (this phase is generally preceded by a pre-modelling stage during which model functional forms are hypothesized based on a priori knowledge); 3) the model validation stage investigates whether the model suffers from overfitting, i.e., good accuracy on training data but significantly lower accuracy on unseen data; 4) the evaluation stage gives a measure of the predictive power of the model, making use of the ROC curve (which allows the predictive power of the model to be evaluated without any assumptions on error costs) and possibly R² for regression models; 5) misclassification analysis suggests useful insights into where the model may be unreliable; and 6) the implementation stage puts the model into practice.

The proposed framework has been applied to three applications in different domains, with a view to improving on previous research studies. The first model predicts mortality within 28 days for patients suffering from acute alcoholic hepatitis. The aim of this application is to build a new predictive model that can be used in clinical practice to identify patients at greatest risk of 28-day mortality, as they may benefit from aggressive intervention, and to monitor their progress while in hospital. A comparison with state-of-the-art tools shows improved predictive power, demonstrating how appropriate variable inclusion may result in overall better model accuracy, which increased by 25% following an appropriate variable selection process.

The second predictive model is designed to aid the diagnosis of dementia, as clinicians often experience difficulties in diagnosing dementia due to the intrinsic complexity of the process and the lack of comprehensive diagnostic tools. The aim of this application is to improve on the performance of a recent application of Bayesian belief networks using an alternative approach based on logistic regression. The approach based on statistical variable selection outperformed the model that used variables selected by domain experts in previous studies; the obtained results outperform the considered benchmarks by 15%.

The third model predicts the probability of experiencing a given symptom among common side-effects in patients receiving chemotherapy. The newly developed model includes a pre-modelling stage (based on previous research studies) and a subsequent regression. The accuracy of the results (computed on a daily basis for each cycle of therapy) shows that the newly proposed approach increased predictive power by 19% compared to the previously developed model; this was obtained by appropriate use of available a priori knowledge to pre-model the functional forms.

As shown by these applications, different aspects of CDSS development are subject to substantial improvement: applying the proposed framework to different domains leads to more accurate models than existing state-of-the-art proposals. The framework can help researchers identify and overcome possible pitfalls in their ongoing research by providing best practices for each step of the development process. An impact on the development of future clinical decision support systems is envisaged: using an appropriate procedure in model development will produce more reliable and accurate systems, and will have a positive impact on newly produced medical knowledge, which may eventually be included in standard clinical practice.
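The staged process this abstract describes maps naturally onto standard statistical tooling. The sketch below is a minimal illustration only, not the author's implementation: the dataset file, column names, and decision threshold are hypothetical, and scikit-learn is assumed as the modelling library.

```python
# Minimal sketch of the staged CDSS model-development framework described
# above. Dataset and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# 1) Dataset preparation: appropriate variables in a consistent format.
df = pd.read_csv("hepatitis_cohort.csv")           # hypothetical file
X = df[["bilirubin", "inr", "creatinine", "age"]]  # hypothetical predictors
y = df["died_within_28_days"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# 2) Model construction: fit logistic regression, determining coefficients.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3) Validation: compare training vs. held-out performance to detect
#    overfitting. 4) Evaluation: ROC AUC assumes nothing about error costs.
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_test = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"train AUC={auc_train:.3f}  test AUC={auc_test:.3f}")

# 5) Misclassification analysis: inspect the held-out errors.
probs = model.predict_proba(X_test)[:, 1]
y_pred = (probs >= 0.5).astype(int)
misclassified = X_test[y_pred != y_test]
```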
43

Automated question answering for clinical comparison questions

Leonhard, Annette Christa. January 2012.
This thesis describes the development and evaluation of new automated Question Answering (QA) methods tailored to clinical comparison questions, giving clinicians a rank-ordered list of MEDLINE® abstracts targeted to natural-language clinical drug comparison questions (e.g. "Have any studies directly compared the effects of Pioglitazone and Rosiglitazone on the liver?"). Three corpora were created to develop and evaluate a new QA system for clinical comparison questions called RetroRank. RetroRank takes the clinician's plain-text question as input, processes it, and outputs a rank-ordered list of potential answer candidates, i.e. MEDLINE® abstracts, which is reordered using new post-retrieval ranking strategies to ensure the most topically relevant abstracts are displayed as high in the result set as possible. RetroRank achieves a significant improvement over the PubMed recency baseline and performs as well as or better than previous approaches to post-retrieval ranking that rely on query frames and annotated data, such as the approach by Demner-Fushman and Lin (2007). The performance of RetroRank shows that it is possible to use natural-language input and a fully automated approach to obtain answers to clinical drug comparison questions. This thesis also introduces two new evaluation corpora of clinical comparison questions with "gold standard" references that are freely available and are a valuable resource for future research in medical QA.
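RetroRank's actual ranking features are specific to the thesis, but the general shape of post-retrieval reranking is easy to sketch. The snippet below is an illustrative stand-in, not RetroRank's method: it reorders already-retrieved abstracts by simple term overlap with the question, a deliberately naive scoring function.

```python
# Illustrative post-retrieval reranking: reorder retrieved abstracts by
# overlap with question terms. A stand-in, not RetroRank's actual scorer.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def rerank(question, abstracts):
    q = tokens(question)
    # Score each abstract by the fraction of question terms it covers.
    return sorted(abstracts,
                  key=lambda a: len(q & tokens(a)) / len(q),
                  reverse=True)

hits = rerank(
    "Have any studies directly compared the effects of pioglitazone "
    "and rosiglitazone on the liver?",
    ["Pioglitazone versus rosiglitazone: comparative hepatic effects.",
     "Metformin monotherapy outcomes in type 2 diabetes."])
print(hits[0])
```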
44

Trust and Trustworthiness: A Framework for Successful Design of Telemedicine

Templeton, James Robert. 01 January 2010.
Trust and its antecedents have been demonstrated to be a barrier to the successful adoption of technology in numerous fields, most notably e-commerce, and may be a key factor in the lack of adoption or adaptation of telemedicine. In the medical arena, trust is often formed through the relationship cultivated over time between clinician and patient. Trust and interpersonal relationships may also play a significant role in the adoption of telemedicine. The idea of telemedicine has been explored for nearly 30 years in one form or another; yet, despite grandiose promises of how it will someday significantly improve the healthcare system, the field continues to lag behind other areas of technology by 10 to 15 years. The reasons for the lack of adoption may be many, given the barriers other researchers have observed with regard to trust and trustworthiness. This study examined the role of trust from various aspects within telemedicine, with particular emphasis on the role trust plays in the adoption and adaptation of a telemedicine system. Simulations examined the role of trust in the treatment and management of diabetes mellitus (a common illness) in order to assess the impact and role of trust components. Subjects were surveyed to capture the trust dynamics, and a framework for successful implementation of telemedicine was developed using trust and trustworthiness as a foundation. Results indicated that certain attributes do influence the level of trust in the system. The framework demonstrated that medical content, disease-state management, perceived patient outcomes, and design all had a significant impact on trust in the system.
45

Computational Toxinology

Romano, Joseph Daniel. January 2019.
Venoms are complex mixtures of biological macromolecules and other compounds that are used for predatory and defensive purposes by hundreds of thousands of known species worldwide. Throughout human history, venoms and venom components have been used to treat a vast array of illnesses, making them of great clinical, economic, and academic interest to the drug discovery and toxinology communities. In spite of major computational advances that facilitate data-driven drug discovery, most therapeutic venom effects are still discovered via tedious trial-and-error, or simply by accident. In this dissertation, I describe a body of work that aims to establish a new subdiscipline of translational bioinformatics, which I name "computational toxinology". To accomplish this goal, I present three integrated components that span a wide range of informatics techniques: (1) VenomKB, (2) VenomSeq, and (3) VenomKB's Semantic API.

To provide a platform for structuring, representing, retrieving, and integrating venom data relevant to drug discovery, VenomKB provides a database-backed web application and knowledge base for computational toxinology. VenomKB is structured according to a fully-featured ontology of venoms, and provides data aggregated from many popular web resources. VenomSeq is a biotechnology workflow designed to generate new high-throughput sequencing data for incorporation into VenomKB. Specifically, we expose human cells to controlled doses of crude venoms, conduct RNA-Sequencing, and build profiles of differential gene expression, which we then compare to publicly available differential-expression data for known diseases and for drugs with known effects; we use those comparisons to hypothesize ways that the venoms could also act in a therapeutic manner. These data are then integrated into VenomKB, where they can be effectively retrieved and evaluated using existing data and known therapeutic associations. VenomKB's Semantic API further develops this functionality by providing an intelligent, powerful, and user-friendly interface for querying the complex underlying data in VenomKB in a way that reflects the intuitive, human-understandable meaning of those data. The Semantic API is designed to cater to the needs of advanced users as well as laypersons and bench scientists without previous expertise in computational biology and semantic data analysis.

In each chapter of the dissertation, I describe how we evaluated these three components. We demonstrate the utility of VenomKB and the Semantic API by testing a number of practical use-cases for each, designed to highlight their ability to rediscover existing knowledge as well as to suggest potential areas for future exploration. We use statistics and data science techniques to evaluate VenomSeq on 25 diverse species of venomous animals, and propose biologically feasible explanations for significant findings. In evaluating the Semantic API, I show how observations on VenomSeq data can be interpreted and placed into the context of past research by members of the larger toxinology community.

Computational toxinology is a toolbox designed to be used by multiple stakeholders (toxinologists, computational biologists, and systems pharmacologists, among others) to improve the return rate of clinically significant findings from manual experimentation. It aims to achieve this goal by enabling access to data, providing means for easy validation of results, and suggesting specific hypotheses that are preliminarily supported by rigorous inferential statistics. All components of the research I describe are open-access and publicly available, to improve reproducibility and encourage widespread adoption.
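The VenomSeq workflow hinges on comparing venom-induced differential-expression profiles against public disease and drug signatures. As a rough, hypothetical sketch of that comparison step (not the dissertation's actual pipeline), one can correlate log-fold-change vectors over a shared gene list; the toy data below are invented for illustration.

```python
# Hypothetical sketch of a signature-comparison step: correlate a venom's
# differential-expression profile with known drug/disease signatures.
import numpy as np
from scipy.stats import spearmanr

# Log-fold-change vectors over a shared, ordered gene list (toy data).
venom_signature = np.array([1.8, -0.2, 0.9, -1.5, 0.3])
drug_signatures = {
    "drug_A": np.array([1.6, 0.1, 0.7, -1.2, 0.2]),
    "drug_B": np.array([-1.1, 0.8, -0.5, 1.4, -0.3]),
}

for name, sig in drug_signatures.items():
    rho, p = spearmanr(venom_signature, sig)
    # Strong positive correlation suggests the venom mimics the drug's
    # effect; strong negative correlation suggests it reverses it.
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")
```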
46

Medical data mining using evolutionary computation.

January 1998.
by Ngan Po Shun. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 109-115). Abstract also in Chinese.

Contents:
1. Introduction: data mining; motivation; contributions of the research; organization of the thesis
2. Related Work in Data Mining: decision tree approaches (ID3, C4.5); classification rule learning (AQ algorithm, CN2, C4.5RULES); association rule mining (Apriori, quantitative association rule mining); statistical approaches (chi-square test and Bayesian classifier, FORTY-NINER, EXPLORA); Bayesian network learning (the Minimum Description Length (MDL) principle; discretizing continuous attributes while learning Bayesian networks)
3. Overview of Evolutionary Computation: genetic algorithms; genetic programming; evolutionary programming; evolution strategies; selection methods; generic genetic programming; data mining using evolutionary computation
4. Applying Generic Genetic Programming for Rule Learning: grammar; population creation; genetic operators; evaluation of rules
5. Learning Multiple Rules from Data: previous approaches (preselection, crowding, deterministic crowding, fitness sharing); token competition; the complete rule learning approach; experiments on the Iris Plant and Monk machine learning databases
6. Bayesian Network Learning: the MDLEP learning approach; learning a discretization policy by genetic algorithm (individual representation, genetic operators); experimental results; comparison between the GA approach and the greedy approach
7. Medical Data Mining System: case studies on the fracture and scoliosis databases (results of causality and structure analysis; results of rule learning)
8. Conclusion and Future Work
Appendix A. The rule sets discovered: Iris; Monk1, Monk2, Monk3; fracture (rules about diagnosis, operation/surgeon, and stay); scoliosis (rules for classification and treatment)
Appendix B. The grammars used for the fracture and scoliosis databases
47

A Visual Approach to Improving the Experience of Health Information for Vulnerable Individuals

Woollen, Janet. January 2018.
Many individuals with low health literacy (LHL) and limited English proficiency (LEP) have poor experiences consuming health information: they find it unengaging, unappealing, difficult to understand, and unmotivating. These negative experiences may blunt, or even sabotage, the desired effect of communicating health information: to increase engagement and the ability to manage one's health. It is imperative to find solutions to improve poor experiences of health information, because such experiences heighten vulnerability to poor health outcomes. We aimed to address a gap in the health literacy literature by studying the patient experience of health information and how visualization might help.

Our four studies involved patients presented with health information in various settings to improve understanding and management of their care. We used semi-structured interviews and observations to understand patient experiences of receiving personal health information in the hospital, and learned that the return of results is desired and has the potential to promote patient engagement with care. We developed a novel method to analyze LHL, LEP caregiver experience and information needs in the community setting; this method increased our understanding and ability to detect differences in experiences within the same ethnic group based on language preference. Next, we interrogated the literature for a way to easily communicate complicated health information to disinterested LHL, LEP individuals, and found that visualizations can increase interest and comprehension, support faster communication, and even help broach difficult topics. Finally, our findings were used to develop a novel prototype to improve the experience of consuming genetic risk information for those with LHL and LEP. Unlike traditional approaches that focus on communicating risk numbers and probabilities, the novelty of our approach was that we focused on communicating risk as a feeling. We achieved this by leveraging vicarious learning via real patient experience materials (e.g., quotes, videos) and empathy with an emotive relational agent.

We evaluated and compared the prototype to standard methods of communicating genetic risk information via a mixed-methods approach that included surveys, questionnaires, interviews, observations, image analysis, and facial analysis. The main outcome variables were perceived ease of understanding, comprehension, emotional response, and motivation. We employed t-tests, ANOVAs, directed content analysis, correlation, regression, hierarchical clustering, and Chernoff faces to answer the research questions. All variables were significantly different for the prototype compared to the standard method, except for motivation as rated by 32 LHL, LEP community members. Findings revealed that LHL, LEP individuals have difficulty appropriately processing standard methods of communicating risk information, such as risk numbers supported by visual aids. Further, appealing visuals may inappropriately increase confidence in understanding of the information. Visualizations affected emotions, which influenced perceived ease of understanding and motivation to take action on the information. Comprehension scores did not correlate with perceived ease of understanding, emotional response, or motivation.

Findings suggest that providing access to comprehensible health information may not be enough to motivate patients to engage with their care; providing a good experience of health information (taking into account aesthetics and emotional response) may be essential to optimize outcomes.
48

Phenotyping Endometriosis from Observational Health Data

McKillop, Mollie. January 2019.
The signs and symptoms of many diseases remain poorly characterized. For these types of conditions, the constellation of symptoms experienced by patients is not adequately described, nor are the signs and symptoms specific to the condition well defined. These features define an enigmatic disease. One of the most prevalent yet enigmatic conditions today is endometriosis, in which endometrial-like cells grow outside the uterus. Largely because of the wide, unexplained variation in patient symptoms beyond the surgical definition of the disease, and the lack of noninvasive diagnostic biomarkers, there is a significant delay in diagnosis. Better characterization of enigmatic diseases like endometriosis should lead towards more accurate and earlier diagnosis.

In informatics, characterizing a condition is known as phenotyping. For a prevalent condition whose symptomatic experience is highly heterogeneous, this process involves the use of data-driven methods to describe group-specific patterns that better explain the heterogeneity. Traditional data sources for phenotyping include observational health data such as electronic health records (EHR) and administrative claims. Collecting data longitudinally, and designing data collection so it is relevant to the patient experience, may provide a complementary characterization of the condition useful for phenotyping. Alternative data sources such as patient-generated health data from self-tracking devices may elucidate, over time, a wider range of signs and symptoms of the disease at a more granular level than traditional phenotyping data sources. Patient-generated health data, however, remain an unexplored data source for phenotyping enigmatic conditions like endometriosis.

This thesis explores the following research questions: 1) To what extent are traditional data sources representative of endometriosis? 2) How should researchers design a self-tracking app for endometriosis that is engaging for the user and supports phenotyping at scale? 3) What computational methods can help phenotype endometriosis at scale from self-tracking data? 4) Can the disease be detected earlier with a validated EHR phenotype?

First, the disease dimensions relevant to endometriosis are elicited both from traditional observational health data sources and from patients directly. Second, using these dimensions, a self-tracking app for endometriosis is designed to be engaging to the user and to facilitate disease phenotyping across a patient population; the app is then developed using a standard software framework, and patients are recruited to use it. Third, using self-tracking data and traditional phenotyping data sources, such as claims and EHRs, computational methods for identifying subtypes of the disease and for early disease detection are explored.

This thesis contributes the following: 1) Using over 1,400 patient records for manual chart review, a validated, reproducible, and portable endometriosis cohort definition for selecting patients from both claims and EHR data is developed, with a sensitivity (recall) of 70%, a specificity of 93%, and a positive predictive value (precision) of 85%. Using this definition, a characterization of the disease to help with early detection is elucidated from over two million endometriosis patients across institutions and settings. 2) A self-tracking app (Phendo) that supports further characterization of the disease at scale has been designed and developed, and is currently used by over 6,000 endometriosis patients from over 70 countries. 3) Data from this app have been used to identify three novel subtypes of the disease that are clinically meaningful, interpretable, and correlate with what is known about the condition from a gold-standard clinical survey. 4) Leveraging the cohort definition for earlier disease detection, a well-performing prediction model for early identification of endometriosis, with an area under the curve of 68.6%, has been trained and tested across a network of observational health databases.
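The cohort-definition metrics quoted above (sensitivity 70%, specificity 93%, PPV 85%) follow directly from a confusion matrix against the manual chart review. A small worked example; the counts below are invented purely so that the formulas reproduce the reported rates, and are not the thesis's actual tallies.

```python
# Illustrative phenotype-validation metrics from a confusion matrix.
# Counts are hypothetical, chosen only to reproduce the quoted figures.
tp, fp, fn, tn = 70, 12, 30, 160

sensitivity = tp / (tp + fn)  # recall: true cases the definition catches
specificity = tn / (tn + fp)  # non-cases it correctly excludes
ppv = tp / (tp + fp)          # precision: flagged patients who are cases

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, PPV={ppv:.2f}")
# -> sensitivity=0.70, specificity=0.93, PPV=0.85
```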
49

Deciphering clinical text : concept recognition in primary care text notes

Savkov, Aleksandar Dimitrov. January 2017.
Electronic patient records, containing data about the health and care of a patient, are a valuable source of information for longitudinal clinical studies. The General Practice Research Database (GPRD) has collected patient records from UK primary care practices since the late 1980s. These records contain both structured data (in the form of codes and numeric values) and free text notes. While the structured data have been used extensively in clinical studies, there are significant practical obstacles in extracting information from the free text notes. The main obstacles are data access restrictions, due to the presence of sensitive information, and the specific language of medical practitioners, which renders standard language processing tools ineffective. The aim of this research is to investigate approaches for computer analysis of free text notes. The research involved designing a primary care text corpus (the Harvey Corpus) annotated with syntactic chunks and clinically-relevant semantic entities, developing a statistical chunking model, and devising a novel method for applying machine learning for entity recognition based on chunk annotation. The tools produced would facilitate reliable information extraction from primary care patient records, needed for the development of clinically-related research. The three medical concept types targeted in this thesis could contribute to epidemiological studies by enhancing the detection of co-morbidities, and better analysing the descriptions of patient experiences and treatments. The main contributions of the research reported in this thesis are: guidelines for chunk and concept annotation of clinical text, an approach to maximising agreement between human annotators, the Harvey Corpus, a method for using a standard part-of-speech tagging model in clinical text chunking, and a novel approach to recognising clinically relevant medical concepts.
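The thesis's reuse of a standard part-of-speech tagging model for chunking reflects a well-known reduction: encode chunk boundaries as per-token BIO labels, then train an ordinary sequence tagger on those labels. A minimal sketch using NLTK, with the public CoNLL-2000 corpus standing in for the access-restricted clinical data:

```python
# Chunking as tagging: map each token's POS tag to a BIO chunk label and
# train a standard tagger on those labels. CoNLL-2000 is a stand-in for
# the access-restricted clinical corpus.
import nltk
from nltk.corpus import conll2000

nltk.download("conll2000", quiet=True)

train_sents = conll2000.chunked_sents("train.txt", chunk_types=["NP"])
# tree2conlltags yields (word, pos, bio_chunk_tag) triples; here the POS
# tag plays the role of the "word" and the chunk tag the role of the "tag".
train_data = [
    [(pos, chunk) for word, pos, chunk in nltk.chunk.tree2conlltags(sent)]
    for sent in train_sents
]

tagger = nltk.BigramTagger(train_data, backoff=nltk.UnigramTagger(train_data))
print(tagger.tag(["DT", "JJ", "NN", "VBD"]))  # POS sequence -> BIO labels
```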
50

Fuzzy ontology and intelligent systems for discovery of useful medical information

Parry, David Tudor. Date unknown.
Reliable and appropriate medical information is currently difficult to find on the Internet. The potential for improving human health through internet-based sources of information is huge, as knowledge becomes more widely available at much lower cost. Medical information has traditionally formed a large part of academic publishing; however, the increasing volume of available information, along with the demand for evidence-based medicine, makes Internet sources appear to be the only practical source of comprehensive and up-to-date information. The aim of this work is to develop a system allowing groups of users to identify information that they find useful and, using those sources as examples, to build an intelligent system that can classify new information sources in terms of their likely usefulness to such groups. Medical information sources are particularly interesting because they cover a very wide range of specialties, require very strict quality control, and carry extremely serious consequences of error; in addition, they are of increasing interest to the general public.

This work covers the design, construction, and testing of such a system and introduces two new concepts: document structure identification via information entropy, and a fuzzy ontology for knowledge representation. A mapping between query terms and members of an ontology is usually a key part of any ontology-enhanced search tool. However, many terms used in queries may be overloaded in terms of the ontology, which limits the potential for automatic query expansion and refinement. In particular, this problem affects information systems where different users are likely to expect different meanings for the same term. This thesis describes the derivation and use of a "fuzzy ontology" that uses fuzzy relations between components of the ontology in order to preserve a common structure; the concept is presented in the medical domain. Kolmogorov distance calculations are used to identify similarity between documents in terms of authorship, origin, and topic. Structural measures such as paragraph tags were also examined, but were found not to be effective in clustering documents. The thesis describes theoretical and practical evaluation of these approaches in the context of a medical information retrieval system designed to support ontology-based search refinement, relevance feedback, and preference sharing between professional groups.
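Kolmogorov complexity is uncomputable exactly, so document-distance work in this vein is normally approximated with a real compressor; the normalized compression distance (NCD) is the standard such approximation. The sketch below is an assumed illustration of that idea using zlib, not necessarily the thesis's exact formulation.

```python
# Compression-based approximation of Kolmogorov distance between documents
# (normalized compression distance). An assumed stand-in for the thesis's
# exact calculation.
import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: str, y: str) -> float:
    a, b = x.encode(), y.encode()
    cx, cy, cxy = c(a), c(b), c(a + b)
    # Values near 0 indicate very similar documents; near 1, unrelated.
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd("aspirin dosage in adults", "aspirin dose for adult patients"))
print(ncd("aspirin dosage in adults", "football league results"))
```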
