11

Module Extraction and Incremental Classification: A Pragmatic Approach for EL⁺ Ontologies

Suntisrivaraporn, Boontawee 16 June 2022 (has links)
The description logic EL⁺ has recently proved practically useful in the life science domain, with the presence of several large-scale biomedical ontologies such as SNOMED CT. To deal with ontologies of this scale, the standard reasoning task of classification is essential but not sufficient. The ability to extract relevant fragments from a large ontology and to classify it incrementally has become crucial to support ontology design, maintenance and reuse. In this paper, we propose a pragmatic approach to module extraction and incremental classification for EL⁺ ontologies and report on empirical evaluations of our algorithms, which have been implemented as an extension of the CEL reasoner.
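The reachability-based flavour of module extraction that this abstract describes can be sketched roughly as follows. The axiom encoding, the symbol names and the `reachability_module` helper are illustrative assumptions for a toy setting, not the CEL implementation:

```python
# A minimal sketch of reachability-based module extraction, assuming a toy
# axiom representation: each axiom is a pair (lhs_symbols, rhs_symbols),
# e.g. ({"Pericarditis"}, {"Inflammation", "Pericardium"}) standing in for
# Pericarditis ⊑ Inflammation ⊓ ∃has_location.Pericardium.

def reachability_module(axioms, seed_signature):
    """Collect all axioms whose left-hand-side symbols are reachable
    from the seed signature, growing the signature to a fixpoint."""
    signature = set(seed_signature)
    module = set()
    changed = True
    while changed:
        changed = False
        for i, (lhs, rhs) in enumerate(axioms):
            # An axiom becomes relevant once every lhs symbol is reachable.
            if i not in module and lhs <= signature:
                module.add(i)
                signature |= rhs  # rhs symbols become reachable too
                changed = True
    return [axioms[i] for i in sorted(module)]

axioms = [
    ({"Pericarditis"}, {"Inflammation", "Pericardium"}),
    ({"Inflammation"}, {"Disease"}),
    ({"Appendicitis"}, {"Inflammation", "Appendix"}),
]
module = reachability_module(axioms, {"Pericarditis"})
# The Appendicitis axiom is never reached from the seed, so the
# extracted module contains only the first two axioms.
```

The fixpoint loop is what makes such modules cheap to compute even for ontologies the size of SNOMED CT: each pass only tests set inclusion, no full classification is needed.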
12

Standardizing our perinatal language to facilitate data sharing

Massey, Kiran Angelina 05 1900 (has links)
Our ultimate goal as obstetric and neonatal care providers is to improve care for mothers and their babies. Continuous quality improvement (CQI) involves iterative cycles of practice change and audit of ongoing clinical care, identifying practices that are associated with good outcomes. A vital prerequisite to this evidence-based medicine is data collection. In Canada, much of the country is covered by separate, fragmented silos known as regional reproductive care databases or perinatal health programs. A more centralized system that includes collaborative efforts is required. Moving in this direction would serve many purposes: efficiency, economy in a setting of limited resources and shrinking budgets and, lastly, interaction among data collection agencies. This interaction may facilitate translation and transfer of knowledge to caregivers and patients. There are, however, many barriers to such collaborative efforts, including privacy, ownership and the standardization of both digital technologies and semantics. After thoroughly examining existing perinatal data collection among Perinatal Health Programs (PHPs) and the Canadian Perinatal Network (CPN) database, it was evident that there is little standardization of definitions. This is one of the most important barriers to data sharing. To communicate effectively and share data, researchers and clinicians alike must construct a common perinatal language. Tools and programs such as SNOMED CT® offer a potential solution but, being in their infancy, still require much work. A standardized perinatal language would not only lay the definitional foundation in women’s health and obstetrics but also serve as a major contribution towards a universal electronic health record.
14

Interoperability and information system replacement in the health sector

Pusatli, Ozgur Tolga January 2009 (has links)
Research Doctorate - Doctor of Philosophy (PhD) / It is difficult to decide when to replace (major components of) information systems (IS) used in large organisations. Obstacles include not only the cost and technical complexity but also the fact that the workplace depends on the current IS and users are familiar with its functionality. The problems become more complicated with the increasing need for IS interconnectivity within and between organisations. Formal guidelines to assist in making replacement decisions are not commonly used. This thesis aims to develop a model of the key factors involved in the IS replacement decision and to investigate the role of interoperability in this decision. It concentrates on the healthcare domain in NSW, Australia, which represents a complex, distributed, multilevel organisation that has identified interoperability as a problem and has started initiatives to improve it. Research in IS and software engineering has shed light on many of the issues associated with the replacement decision. For example, studies in technology acceptance have explained why organisations delay moving to new technologies and have modelled the effect of the increasing popularity of such technologies. IS success models have explored the factors that contribute to the success and failure of deployed systems, providing checklists to assess the appropriateness of current systems from the point of view of users and other organisational stakeholders. Research into the value of user feedback has helped managers to align user expectations with workplace IS. In terms of software function, metrics have been developed to measure a range of factors, including performance, usability, efficiency and reliability, that help determine how well systems are performing from a technical perspective. 
Additional research has identified important points to consider when comparing custom-made systems with off-the-shelf systems, such as skill availability and after-sales support. Maturity models and life cycle analyses consider the effect of age on IS, and Lehman’s laws of software evolution highlight the need for maintenance if an IS is to survive. Improvements in interoperability at the information level have been achieved through domain-specific standards for data integrity and modular approaches to partial changes in IS. In particular, the healthcare domain has developed a number of standardised terminological systems, such as SNOMED, LOINC and ICD, and messaging standards such as HL7. Template high-level data models have also been trialled as a way to ensure new IS remain compatible with existing systems. While this literature partially covers and contributes to the understanding of when and how to replace IS and/or components, to our knowledge there has been no attempt to provide an integrated model identifying the factors to be considered in the replacement decision. The thesis adopts a multi-method approach to build a model of IS replacement and to explore aspects of interoperability. Preliminary factors and their potential interactions were first identified from the literature. In-depth interviews were conducted with 10 experts and 2 IS users to investigate the validity and importance of the factors and interactions and to elicit further potential items. The analysis of the transcripts guided review of further literature and contemporary data, which led to the development of a final model and insights into the role of interoperability. A member check was used to validate both the model and the researcher’s conclusions on interoperability. The final model is centred on the change request, that is, any request made by or on behalf of an executive officer in order to maintain or replace part or all of an IS. 
The change request is informed by user feedback, but our research distinguishes the two factors because the change request factor filters and manages requests for change from multiple sources. Other factors that have an important direct or indirect effect on generating change requests include: the extent of system specialisation, that is, how the system is tailored to satisfy organisational requirements; popularity, the degree to which an IS or a component is liked or supported by its user community; the prevalence and severity of errors and failures in the systems; the usability and performance of the systems; and the adequacy of support, including training, documentation and so on. The dependent factors are maintenance and replacement, determined through the change requests. The validation through member checking showed that IS practitioners found our model useful in explaining the replacement process. The model provided an interpretation of the change requests. By exposing and clustering the reasons behind change requests, the complexity of deciding whether to maintain or replace system components can be reduced, and individual factors can be addressed more specifically. Formal guidelines on whether to maintain or replace components or entire IS can be drawn up using this understanding. The factors and their interactions as explained in the model could form the basis of a decision tree, customised for organisational jargon and priorities. The requirement for interoperability is an aspect of system specialisation. An important finding from the research was that one of the most significant reasons to change a system is when problems are encountered in exchanging data and information. Conversely, as long as systems can exchange data, there is less pressure to replace them. Organisations benefit more from systems that provide better support for interoperability. 
Findings on interoperability in the health domain were that existing messaging standards (mostly HL7) used in information exchange between subsystems, including legacy databases, are useful and are used. Ambiguities are also diminished by controlled vocabularies (mostly SNOMED, LOINC and ICD in the NSW health domain). However, a methodology for comparing data models known as the Interoperability Framework, supported by government funding bodies, has not been adopted and is not given any significant credit by users. Likewise, a government proposal to use an overarching high-level data model has not been adopted, in part because it is too complex. Guiding the use of such a data model requires a methodology for comparing data models; an example of such a methodology is developed in this thesis. The thesis research found that replacement decisions in the healthcare domain are affected by the existing quasi-monopoly of large vendors, which tend to use proprietary standards that limit interoperability. The research concludes that interoperability should be achieved through increased use of vendor-independent messaging and terminological standards. In order to gain the co-operation of individual health institutions within the domain, initial investments should be concentrated on simple, easy-to-adopt standards. A primary limitation of this thesis is the extent of testing of the findings. Data from a broader range of organisations, in different sectors and different countries, is needed to validate the model and to guide the development of decision-making tools based on it. Particularly valuable would be case studies of IS replacement decision making and the process executives use in determining change requests. The priorities of the factors and their attributes, as well as the strengths of the relationships in the model, need to be tested empirically using tailored survey instruments. 
Another interesting research avenue, only touched on in the thesis, is the effect of policies and legislation on interoperability and on replacement decisions.
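The decision-tree idea the abstract mentions can be illustrated with a minimal sketch. The factor labels, the `REPLACE_LEANING` set and the threshold below are invented for illustration and are not part of the thesis model:

```python
# A hedged sketch of how the model's factors might feed a simple
# maintain-vs-replace decision rule, once change requests have been
# filtered and clustered by the factor that generated them.

from collections import Counter

# Factors that, per the findings, resist incremental maintenance
# (data-exchange problems in particular push toward replacement).
REPLACE_LEANING = {"interoperability", "errors_and_failures"}

def recommend(change_requests, replace_threshold=0.5):
    """change_requests: list of factor labels, one per filtered request.
    Recommend 'replace' when replace-leaning factors dominate."""
    if not change_requests:
        return "maintain"
    counts = Counter(change_requests)
    heavy = sum(n for factor, n in counts.items() if factor in REPLACE_LEANING)
    return "replace" if heavy / len(change_requests) > replace_threshold else "maintain"

requests = ["usability", "interoperability", "interoperability",
            "errors_and_failures", "support"]
print(recommend(requests))  # → replace
```

In a real decision tool the labels and weights would come from the organisation's own jargon and priorities, as the abstract suggests.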
16

Extracting Structured Data from Free-Text Clinical Notes: The impact of hierarchies in model training

Omer, Mohammad January 2021 (has links)
Diagnosis code assignment is the task of automatically assigning diagnosis codes to free-text clinical notes. Assigning diagnosis codes manually requires expertise and time, so doing it automatically makes it easier to obtain structured data from the free-text clinical notes in Electronic Health Records. It can also serve as decision support for clinicians, who can input their notes and get back diagnosis codes as a second opinion. This project investigates the effect of using the hierarchies in which diagnosis codes are structured when training diagnosis code assignment models, compared to models trained with a standard loss function, binary cross-entropy. This was done using the hierarchies of two systems of diagnosis codes, ICD-9 and SNOMED CT, where one hierarchy is more detailed than the other. The results showed that hierarchical training increased the recall of the models regardless of which hierarchy was used. The more detailed hierarchy, SNOMED CT, increased recall more than the less detailed ICD-9 hierarchy did. However, with the SNOMED CT hierarchy the precision of the models decreased, while the differences in precision with the ICD-9 hierarchy were not statistically significant. Measured by the F1-score, the harmonic mean of the two metrics, the increase in recall did not make up for the decrease in precision when training with the SNOMED CT hierarchy. The conclusions are that a more detailed hierarchy increases recall more than a less detailed one, but overall performance measured by F1-score decreases because precision drops by more than recall gains. The less detailed hierarchy maintained precision, giving an increase in overall performance. 
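One common way to inject a code hierarchy into training is to expand each gold label set with its ancestor codes before computing binary cross-entropy, so that predictions "close" in the hierarchy are penalized less. The sketch below illustrates that idea only; the toy `PARENT` map, the code values and the helper names are assumptions, not the thesis's actual models or loss:

```python
# A minimal sketch of hierarchy-aware label expansion for multi-label
# diagnosis code assignment, compared against plain binary cross-entropy.

import math

PARENT = {               # toy ICD-9-like hierarchy (assumption)
    "428.0": "428",      # congestive heart failure -> heart failure
    "428": "420-429",    # heart failure -> diseases of the heart
}

def with_ancestors(codes):
    """Expand a set of gold codes with all their ancestors."""
    out = set(codes)
    for c in codes:
        while c in PARENT:
            c = PARENT[c]
            out.add(c)
    return out

def binary_cross_entropy(probs, gold, all_codes):
    """Mean BCE over the label space, with gold labels expanded upward."""
    gold = with_ancestors(gold)
    loss = 0.0
    for c in all_codes:
        y = 1.0 if c in gold else 0.0
        p = min(max(probs.get(c, 0.0), 1e-7), 1 - 1e-7)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss / len(all_codes)

codes = ["428.0", "428", "420-429"]
print(sorted(with_ancestors({"428.0"})))  # ['420-429', '428', '428.0']
```

A model that is confident about a parent code but unsure about the exact leaf is rewarded once ancestors count as positives, which is one intuition for why recall rises under hierarchical training.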
17

Application and Evaluation of Unified Medical Language System Resources to Facilitate Patient Information Acquisition through Enhanced Vocabulary Coverage

Mills, Eric M. III 26 April 1998 (has links)
Two broad themes of this research are: 1) to develop a generalized framework for studying the process of patient information acquisition and 2) to develop and evaluate automated techniques for identifying domain-specific vocabulary terms contained in, or missing from, a standardized controlled medical vocabulary, with emphasis on the terms necessary for representing the canine physical examination. A generalized framework for studying the process of patient information acquisition is addressed by the Patient Information Acquisition Model (PIAM). PIAM illustrates the decision-to-perception chain that links a clinician's decision to collect information, either personally or through another, with the perception of the resulting information. PIAM serves as a framework for a systematic approach to identifying causes of missing or inaccurate information. The vocabulary studies in this research were conducted using free text with two objectives: 1) to develop and evaluate automated techniques for identifying canine physical examination terms contained in the Systematized Nomenclature of Medicine and Veterinary Medicine (SNOMED), version 3.3, and 2) to develop and evaluate automated techniques for identifying canine physical examination terms not documented in the 1997 release of the Unified Medical Language System (UMLS). Two lexical matching techniques for identifying SNOMED concepts contained in free text were evaluated: 1) lexical matching using SNOMED version 3.3 terms alone and 2) Metathesaurus-enhanced lexical matching. Metathesaurus-enhanced lexical matching utilized non-SNOMED terms from the source vocabularies of the UMLS Metathesaurus to identify SNOMED concepts in free text, using links among synonymous terms contained in the Metathesaurus. Explicit synonym disagreement between the Metathesaurus and its source vocabularies was identified during the Metathesaurus-enhanced lexical matching studies. 
Explicit synonym disagreement occurs 1) when terms within a single concept group in a source vocabulary are mapped to multiple Metathesaurus concepts, and 2) when terms from multiple concept groups in a source vocabulary are mapped to a single Metathesaurus concept. Five causes of explicit synonym disagreement between a source vocabulary and the Metathesaurus were identified in this research: 1) errors within a source vocabulary, 2) errors within the Metathesaurus, 3) errors in mapping between the Metathesaurus and a source vocabulary, 4) systematic differences in vocabulary management between the Metathesaurus and a source vocabulary, and 5) differences regarding synonymy among domain experts, based on perspective or context. Three approaches to reconciling differences among domain experts are proposed. First, document which terms are involved. Second, provide a mechanism for selecting either vocabulary-based or Metathesaurus-based synonymy. Third, assign a "basis of synonymy" attribute to each set of synonymous terms in order to identify the perspective or context of synonymy explicitly. The second objective, identifying canine physical examination terms not documented in the 1997 release of the UMLS, was accomplished using lexical matching, domain-specific free text, the Metathesaurus and the SPECIALIST Lexicon. Terms contained in the Metathesaurus and SPECIALIST Lexicon were removed from the free text, and the remaining character strings were presented to domain experts, along with the original sections of text, for manual review. / Ph. D.
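Metathesaurus-enhanced lexical matching, as the abstract describes it, can be sketched like this. The terms, the concept identifier and the lexicon contents below are invented for illustration, not drawn from SNOMED or the UMLS:

```python
# An illustrative sketch: the SNOMED term list is augmented with synonyms
# that the Metathesaurus links to the same concept, so more surface forms
# in free text resolve to a SNOMED concept than with plain matching.

SNOMED_TERMS = {"cardiac murmur": "C-SNOMED-001"}   # toy SNOMED lexicon
METATHESAURUS_SYNONYMS = {                          # toy non-SNOMED synonyms
    "heart murmur": "C-SNOMED-001",                 # linked via Metathesaurus
}

def match_concepts(text, enhanced=True):
    """Return SNOMED concept ids whose terms (or linked synonyms)
    occur verbatim in the lower-cased text."""
    lexicon = dict(SNOMED_TERMS)
    if enhanced:
        lexicon.update(METATHESAURUS_SYNONYMS)
    text = text.lower()
    return {cid for term, cid in lexicon.items() if term in text}

note = "Grade II heart murmur noted on auscultation."
print(match_concepts(note, enhanced=False))  # plain matching misses it
print(match_concepts(note, enhanced=True))   # synonym link finds the concept
```

Real systems normalize inflection and word order before matching, but even this naive substring form shows why synonym links raise coverage, and why synonym disagreements between the Metathesaurus and its source vocabularies matter.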
18

Formalizing biomedical concepts from textual definitions

Petrova, Alina, Ma, Yue, Tsatsaronis, George, Kissa, Maria, Distel, Felix, Baader, Franz, Schroeder, Michael 07 January 2016 (has links) (PDF)
BACKGROUND: Ontologies play a major role in the life sciences, enabling a number of applications, from data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions. RESULTS: We evaluate our method on three benchmarks, testing both the underlying relation extraction component and the overall quality of the output concept definitions. In addition, we analyse the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact the quality of the generated definitions? (3) How do different machine learning algorithms compare for the task of formal definition generation? (4) What is the influence of the size of the learning data on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm and data size does not impact performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is the most valuable in formalizing textual definitions. 
CONCLUSIONS: The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, paving the way for future applications that go beyond retrieval into complex reasoning. The method is implemented and accessible to the public at: https://github.com/alifahsyamsiyah/learningDL.
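The role the abstract assigns to semantic types, constraining the domain and range of a predicted relation, can be sketched as follows. The relation names, semantic types and the `filter_relations` helper are illustrative assumptions, not the paper's implementation:

```python
# A hedged sketch of a semantic-type filter: a predicted relation triple
# is kept only if its arguments' semantic types fit the relation's
# declared domain and range, discarding type-incompatible predictions.

DOMAIN_RANGE = {   # relation -> (domain semantic type, range semantic type)
    "finding_site": ("disorder", "body_structure"),
    "causative_agent": ("disorder", "organism"),
}
SEMANTIC_TYPE = {  # toy semantic-type assignments (assumption)
    "myocarditis": "disorder",
    "myocardium": "body_structure",
    "coxsackievirus": "organism",
}

def filter_relations(predicted):
    """Keep (relation, subject, object) triples whose argument types
    satisfy the relation's domain/range constraints."""
    kept = []
    for rel, subj, obj in predicted:
        dom, rng = DOMAIN_RANGE.get(rel, (None, None))
        if SEMANTIC_TYPE.get(subj) == dom and SEMANTIC_TYPE.get(obj) == rng:
            kept.append((rel, subj, obj))
    return kept

preds = [
    ("finding_site", "myocarditis", "myocardium"),
    ("finding_site", "myocarditis", "coxsackievirus"),  # wrong range: dropped
]
print(filter_relations(preds))
```

This is why non-overlapping domain/range pairs are so valuable in the reported results: when each relation's argument types are distinct, the filter alone resolves most ambiguity.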
