501

Modélisation des signes dans les ontologies biomédicales pour l'aide au diagnostic. / Representation of signs in biomedical ontologies for diagnostic support.

Donfack Guefack, Pierre Sidoine V. 20 December 2013 (has links)
Introduction: Making a reliable medical diagnosis requires identifying a patient's disease from the observation of signs and symptoms. Ontologies provide an adequate and efficient formalism for representing biomedical knowledge. However, classical ontologies cannot represent the knowledge involved in medical diagnostic reasoning: probabilistic knowledge, and imprecise or vague knowledge. Material and methods: General knowledge representation methods are proposed for building ontologies suited to medical diagnosis. They represent: (a) imprecise or vague knowledge by discretizing concepts (defining several distinct categories with threshold values, or representing the various possible modalities); (b) probabilistic knowledge (the sensitivity and specificity of signs for diseases, and the prevalence of diseases in a given population) by reifying relations of arity greater than 2; (c) absent signs by dedicated relations; and (d) diagnostic reasoning, including reasoning on absent signs, by SWRL rules. An abductive and probabilistic inference engine was designed and implemented, and the methods were evaluated on real patient records.
Results: The methods were applied to three domains (plasma cell diseases, dental emergencies, and traumatic knee injuries), for which ontological models were built. The evaluation measured an average rate of 89.34% correct diagnoses. Discussion-Conclusion: Unlike the models proposed by Fenz (which supports only probabilistic reasoning) and García-Crespo (which expresses probabilities outside the ontological model), these methods yield a single model usable for both abductive and probabilistic reasoning. Using such a system will first require integrating it into the hospital information system so that the electronic patient record can be exploited automatically; this integration could be eased by the system's ontology.
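As an illustration of the reification idea in (b), the sketch below encodes sign-disease links carrying sensitivity and specificity as plain Python objects and scores a disease naive-Bayes style. It is only a stand-in for the thesis's OWL/SWRL machinery: every class name, sign, and probability in it is invented.

```python
# Illustrative sketch only: the thesis reifies sign-disease relations as OWL
# ontology constructs and queries them with SWRL rules; plain Python objects
# stand in for that layer here. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class SignDiseaseLink:
    """Reified n-ary relation: carries sensitivity and specificity."""
    sign: str
    disease: str
    sensitivity: float   # P(sign present | disease)
    specificity: float   # P(sign absent  | no disease)

LINKS = [
    SignDiseaseLink("bone_pain", "multiple_myeloma", 0.70, 0.80),
    SignDiseaseLink("anemia",    "multiple_myeloma", 0.60, 0.75),
]
PREVALENCE = {"multiple_myeloma": 0.001}  # hypothetical population prior

def posterior_score(disease, present, absent):
    """Naive-Bayes style score: prior times likelihoods of observed signs."""
    score = PREVALENCE[disease]
    for link in LINKS:
        if link.disease != disease:
            continue
        if link.sign in present:      # observed sign raises the score
            score *= link.sensitivity
        elif link.sign in absent:     # explicitly absent sign lowers it
            score *= (1.0 - link.sensitivity)
    return score

print(posterior_score("multiple_myeloma", {"bone_pain"}, {"anemia"}))
```

Ranking such scores across candidate diseases is one simple way to mimic the abductive step of proposing the diagnoses that best explain the observed signs.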
502

n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis

Yellamraju Tarun (6630578) 11 June 2019 (has links)
Analyzing the structure of a dataset is a challenging problem in high dimensions, as the volume of the space increases at an exponential rate and data typically becomes sparse in this high-dimensional space. This poses a significant challenge to machine learning methods, which rely on exploiting structures underlying data to make meaningful inferences. This dissertation proposes the n-TARP method as a building block for high-dimensional data analysis, in both supervised and unsupervised scenarios.

The basic element, n-TARP, consists of a random projection framework to transform high-dimensional data to one-dimensional data in a manner that yields point separations in the projected space. The point separation can be tuned to reflect classes in supervised scenarios and clusters in unsupervised scenarios. The n-TARP method finds linear separations in high-dimensional data. This basic unit can be used repeatedly to find a variety of structures. It can be arranged in a hierarchical structure like a tree, which increases the model complexity, flexibility and discriminating power. Feature space extensions combined with n-TARP can also be used to investigate non-linear separations in high-dimensional data.

The application of n-TARP to both supervised and unsupervised problems is investigated in this dissertation. In the supervised scenario, a sequence of n-TARP based classifiers with increasing complexity is considered. The point separations are measured by classification metrics like accuracy, Gini impurity or entropy. The performance of these classifiers on image classification tasks is studied. This study provides an interesting insight into the working of classification methods. The sequence of n-TARP classifiers yields benchmark curves that put in context the accuracy and complexity of other classification methods for a given dataset. The benchmark curves are parameterized by classification error and computational cost to define a benchmarking plane. This framework splits the plane into regions of "positive gain" and "negative gain" which provide context for the performance and effectiveness of other classification methods. The asymptotes of benchmark curves are shown to be optimal (i.e. at Bayes error) in some cases (Theorem 2.5.2).

In the unsupervised scenario, the n-TARP method highlights the existence of many different clustering structures in a dataset. However, not all structures present are statistically meaningful. This issue is amplified when the dataset is small, as random events may yield sample sets that exhibit separations not present in the distribution of the data. Thus, statistical validation is an important step in data analysis, especially in high dimensions. However, statistically validating results often requires an exponentially increasing number of data samples as the dimension increases. The proposed n-TARP method circumvents this challenge by evaluating statistical significance in the one-dimensional space of data projections. The n-TARP framework also yields several different statistically valid instances of point separation into clusters, as opposed to a unique "best" separation, which leads to a distribution of clusters induced by the random projection process.

The distributions of clusters resulting from n-TARP are studied. This dissertation focuses on small-sample high-dimensional problems. A large number of distinct clusters are found and statistically validated. The distribution of clusters is studied as the dimensionality of the problem evolves through the extension of the feature space using monomial terms of increasing degree in the original features, which corresponds to investigating non-linear point separations in the projection space.

A statistical framework is introduced to detect patterns of dependence between the clusters formed with the features (predictors) and a chosen outcome (response) in the data that is not used by the clustering method. This framework is designed to detect the existence of a relationship between the predictors and the response, and can also serve as an alternative cluster validation tool.

The concepts and methods developed in this dissertation are applied to a real-world data analysis problem in engineering education. Specifically, engineering students' Habits of Mind are analyzed. The data at hand is qualitative, in the form of text, equations and figures. To use the n-TARP based analysis method, the source data must be transformed into quantitative data (vectors). This is done by modeling it as a random process based on the theoretical framework defined by a rubric. Since the number of students is small, this problem falls into the small-sample high-dimensions scenario. The n-TARP clustering method is used to find groups within this data in a statistically valid manner. The resulting clusters are analyzed in the context of education to determine what they represent. The dependence of student performance indicators, like the course grade, on the clusters formed with n-TARP is studied in the pattern dependence framework, and the observed effect is statistically validated. The data obtained suggest the presence of a large variety of patterns of Habits of Mind among students, many of which are associated with significant grade differences. In particular, the course grade is found to be dependent on at least two Habits of Mind: "computation and estimation" and "values and attitudes."
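The project-and-threshold step described above can be sketched compactly. The following is a minimal Python illustration of the idea, not the dissertation's implementation: the number of projections, the widest-gap separation score, and the thresholding rule are all assumptions of this sketch.

```python
# Minimal n-TARP-style sketch: project high-dimensional data onto random
# directions, keep the direction whose 1-D projection shows the clearest
# two-group separation (here: the widest interior gap), and threshold it.
import numpy as np

def n_tarp(X, n_projections=100, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_projections):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        p = X @ w                       # 1-D projection of every sample
        p_sorted = np.sort(p)
        gaps = np.diff(p_sorted)
        i = np.argmax(gaps[1:-1]) + 1   # widest interior gap = candidate split
        score = gaps[i] / (p_sorted[-1] - p_sorted[0] + 1e-12)
        if best is None or score > best[0]:
            thr = (p_sorted[i] + p_sorted[i + 1]) / 2
            best = (score, w, thr)
    score, w, thr = best
    return w, thr, (X @ w) > thr        # direction, threshold, cluster labels

# Two synthetic Gaussian clusters in 20 dimensions.
X = np.vstack([np.random.randn(50, 20), np.random.randn(50, 20) + 2.0])
w, thr, labels = n_tarp(X)
print(labels.sum(), "samples on one side of the split")
```

In the dissertation's framework this unit would be applied repeatedly, with the resulting separations statistically validated in the one-dimensional projected space.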
503

As contribuições da comunicação e do conhecimento da Ciência da Informação para a análise de requisitos no desenvolvimento de software / The contributions of communication and Information Science knowledge to requirements analysis in software development

Pinto Filho, Antonio Tupinambá Timbira de Oliveira 16 August 2005 (has links)
This research seeks to understand, in the system definition phase, the problems of valuing, eliciting and formalizing information, focusing on the communication, in its several stages, between the main actors of a software development project during requirements analysis: analysts and users. A requirements analysis model is presented that takes communication issues and knowledge from Information Science into account when formalizing system specifications. The model helps identify the important aspects within the user's information domain so that they can be externalized and formalized, yielding a requirements definition as close to reality as possible.
504

SweetDeal: Representing Agent Contracts With Exceptions using XML Rules, Ontologies, and Process Descriptions

Grosof, Benjamin; Poon, Terrence C. 16 September 2003 (has links)
SweetDeal is a rule-based approach to the representation of business contracts that enables software agents to create, evaluate, negotiate, and execute contracts with substantial automation and modularity. It builds upon the situated courteous logic programs knowledge representation in RuleML, the emerging standard for Semantic Web XML rules. Here, we extend the SweetDeal approach by also incorporating process knowledge descriptions whose ontologies are represented in DAML+OIL (the close predecessor of W3C's OWL, the emerging standard for Semantic Web ontologies), thereby enabling more complex contracts with behavioral provisions, especially for handling exception conditions (e.g., late delivery or non-payment) that might arise during the execution of the contract. This provides a foundation for representing and automating deals about services, in particular Web Services, so as to help search, select, and compose them. We give a detailed application scenario of late delivery in manufacturing supply chain management (SCM). In doing so, we draw upon our new formalization of process ontology knowledge from the MIT Process Handbook, a large, previously existing repository used by practical industrial process designers. Our system is the first to combine the emerging Semantic Web standards for knowledge representation of rules (RuleML) and ontologies (DAML+OIL/OWL), moreover for a practical e-business application domain, and further to do so with process knowledge. This also fleshes out the evolving concept of Semantic Web Services. A prototype (soon public) i
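The courteous-logic idea at the heart of SweetDeal, where a higher-priority exception rule (such as late delivery) overrides a default contract conclusion, can be caricatured in a few lines. This is emphatically not RuleML/SCLP syntax: the rules, priorities, and penalty figures below are invented for illustration only.

```python
# Toy illustration of courteous override: when two rules conclude about the
# same head ("price"), the higher-priority rule wins. All values invented.
RULES = [  # (priority, applies?, conclusion)
    (1, lambda f: f["delivered"],
        lambda f: ("price", 100.0)),                          # default price
    (2, lambda f: f["delivered"] and f["days_late"] > 0,
        lambda f: ("price", 100.0 - 2.5 * f["days_late"])),   # late-delivery exception
]

def conclude(facts):
    """Fire all applicable rules; on conflicting heads, keep the highest priority."""
    best = {}
    for priority, applies, conclusion in RULES:
        if applies(facts):
            head, value = conclusion(facts)
            if head not in best or priority > best[head][0]:
                best[head] = (priority, value)
    return {head: value for head, (_, value) in best.items()}

print(conclude({"delivered": True, "days_late": 3}))  # {'price': 92.5}
```

In the actual system, such prioritized rules live in RuleML documents and the exception-handling provisions are grounded in process ontologies rather than hard-coded conditions.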
505

Modelos de representación de arquetipos en sistemas de información sanitarios / Models for representing archetypes in health information systems.

Menárguez Tortosa, Marcos 29 May 2013 (has links)
This doctoral thesis presents an ontology-based approach to representing the dual-model architecture of the Electronic Health Record. Representing archetypes in OWL enables: (1) the definition and implementation of a quality-evaluation method for archetypes based on reasoning techniques, (2) the definition of a methodology and a framework for the interoperability of clinical content models, and (3) the application of model-driven software development techniques and tools to generate health information systems automatically from archetypes.
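One way to picture the quality-evaluation idea in (1): once archetype constraints are made explicit as axioms, automated reasoning can flag archetypes that no instance could ever satisfy. The sketch below mimics this with plain Python objects instead of OWL axioms; the constraint shape, paths, and values are invented for the example.

```python
# Library-free sketch: archetype constraints as explicit objects, plus an
# automated check that flags defective ones (an interval whose bounds
# contradict each other). The archetype content below is invented.
from dataclasses import dataclass

@dataclass
class QuantityConstraint:
    path: str          # node within the archetype, e.g. an openEHR-style path
    units: str
    minimum: float
    maximum: float

def quality_report(constraints):
    """Return the constraints that no instance could ever satisfy."""
    return [c for c in constraints if c.minimum > c.maximum]

blood_pressure = [
    QuantityConstraint("/data/systolic",  "mm[Hg]", 0.0, 1000.0),
    QuantityConstraint("/data/diastolic", "mm[Hg]", 500.0, 120.0),  # defective
]
for bad in quality_report(blood_pressure):
    print("unsatisfiable constraint at", bad.path)
```

In the thesis the same kind of defect is detected by a description-logic reasoner over the OWL representation, which scales the idea to much richer constraint languages than this toy interval check.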
506

Semantics and Knowledge Engineering for Requirements and Synthesis in Conceptual Design: Towards the Automation of Requirements Clarification and the Synthesis of Conceptual Design Solutions

Christophe, François 27 July 2012 (has links) (PDF)
This thesis suggests the use of tools from the disciplines of Computational Linguistics and Knowledge Representation, with the idea that such tools would enable the partial automation of two processes of Conceptual Design: the analysis of requirements and the synthesis of concepts of solution. The viewpoint on Conceptual Design developed in this research is based on the systematic methodologies developed in the literature. The evolution of these methodologies provides a precise description of the tasks to be achieved by the design team in order to achieve successful design. The argument of this thesis is therefore that it is possible to create computer models of some of these tasks in order to partially automate the refinement of the design problem and the exploration of the design space. In Requirements Engineering, the definition of requirements consists in identifying the needs of various stakeholders and formalizing them into design specifications. During this task, designers face the problem of dealing with individuals of different expertise who express their needs with different levels of clarity. This research tackles this issue for requirements expressed in natural language (in this case English). The analysis of needs is realised at different linguistic levels: lexical, syntactic and semantic. The lexical level deals with the meaning of the words of a language. Syntactic analysis deals with the construction of sentences, i.e. the grammar of a language. The semantic level aims at finding the specific meaning of words in the context of a sentence. This research makes extensive use of a semantic atlas based on the concept of clique from graph theory. Such a concept enables the computation of distances between a word and its synonyms. Additionally, a methodology and a similarity metric were defined for clarifying requirements at the syntactic, lexical and semantic levels. This methodology integrates tools from research collaborators. For the synthesis process, a Knowledge Representation of the concepts necessary for enabling computers to create concepts of solution was developed. Such concepts are: function, input/output flow, generic organs, behaviour and components. The semantic atlas is also used at that stage to map functions to their solutions, working as the interface between the concepts of this Knowledge Representation.
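The lexical-level similarity that the semantic atlas provides can be approximated very roughly as overlap between synonym neighbourhoods. The sketch below is such a stand-in, not the clique-based atlas itself; the similarity measure (Jaccard overlap) and the tiny synonym table are assumptions of this example.

```python
# Rough stand-in for a semantic-atlas distance: similarity between two terms
# as the Jaccard overlap of their synonym neighbourhoods. Table invented.
SYNONYMS = {
    "fast":   {"fast", "quick", "rapid", "speedy"},
    "quick":  {"quick", "fast", "prompt", "swift"},
    "secure": {"secure", "safe", "protected"},
}

def similarity(a, b):
    """Jaccard overlap of synonym neighbourhoods, in [0, 1]."""
    sa, sb = SYNONYMS.get(a, {a}), SYNONYMS.get(b, {b})
    return len(sa & sb) / len(sa | sb)

print(similarity("fast", "quick"))   # > 0: overlapping neighbourhoods
print(similarity("fast", "secure"))  # 0.0: disjoint neighbourhoods
```

A measure of this kind is what lets a requirements tool suggest that two stakeholders phrasing a need with different words may be talking about the same specification.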
507

A model of pulsed signals between 100 MHz and 100 GHz in a Knowledge-Based Environment

Fitch, Phillip January 2009 (has links)
The thesis describes a model of electromagnetic pulses from sources between 100 MHz and 100 GHz for use in the development of knowledge-based systems. Each pulse, including its modulations, is described as it would be seen by a definable receiving system. The model has been validated against a range of knowledge-based systems, including a neural network, a learning system and an expert system.
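A minimal sketch of the kind of pulse description such a model covers: a rectangular pulse train on a carrier, sampled as a receiver would see it. The parameters below (carrier, pulse repetition interval, pulse width) are arbitrary illustrative values, not figures from the thesis.

```python
# Toy pulsed-signal generator: a gated carrier sampled at rate fs.
import numpy as np

def pulse_train(carrier_hz, pri_s, width_s, duration_s, fs):
    """Amplitude samples of a pulsed carrier at sample rate fs."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    gate = (t % pri_s) < width_s            # 1 inside each pulse, 0 between
    return gate * np.cos(2 * np.pi * carrier_hz * t)

# A 1 kHz stand-in carrier (a real 100 MHz-100 GHz carrier would need an
# impractical sample rate for a toy script), 10 ms PRI, 1 ms pulses.
x = pulse_train(1e3, 10e-3, 1e-3, 0.1, fs=100e3)
print(x.shape, float(abs(x).max()))
```

Describing each pulse by such parameters, plus its modulations, is what makes the signal usable as structured input for neural, learning, and expert systems.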
508

Modelo para estruturação e representação de diálogos em fórum de discussão / A model for structuring and representing dialogues in discussion forums

Buiar, José Antônio 16 October 2012 (has links)
Adapting traditional face-to-face teaching practices to distance learning introduces several changes in school practice. In the absence of direct contact between educator and students, technological artifacts are needed to replace direct interaction. The discussion forum is one such artifact: it acts as a catalyst for communication among the participants and can be an important instrument in the educational process. However, the unstructured nature of forum text messages hampers their use in individual student assessment, and analyzing and qualifying the content of the stored messages is a major challenge for the instructor. The absence of a formal structure for representing students' concepts, beliefs and ideas can be pointed to as one of the factors that makes this challenge harder. This research proposes a model for structuring and representing forum messages along three aspects: (i) the concepts presented, (ii) who presented them, and (iii) when they were presented. To validate the model, a computer program built on its concepts was developed and tested on a forum of the Moodle virtual environment. Through this model of structuring and representing messages, a map or guide is generated that can be accessed by the professor or instructor and used as a support tool for analyzing or assessing the Moodle forum as a whole or each student's individual participation.
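The three-aspect structure the model imposes on forum posts (which concept, by whom, when) can be sketched directly. In this illustration the Moodle integration and the concept-extraction step are stubbed out with simple keyword matching; the concept list and the sample post are invented.

```python
# Sketch of structuring forum posts into (concept, author, time) triples.
from dataclasses import dataclass
from datetime import datetime

CONCEPTS = {"recursion", "inheritance", "polymorphism"}  # invented course concepts

@dataclass
class ConceptMention:
    concept: str
    author: str
    when: datetime

def structure(posts):
    """Map raw (author, timestamp, text) posts to concept/author/time triples."""
    mentions = []
    for author, when, text in posts:
        for concept in CONCEPTS:
            if concept in text.lower():
                mentions.append(ConceptMention(concept, author, when))
    return mentions

posts = [("alice", datetime(2012, 10, 1, 9, 30), "Recursion confused me at first")]
for m in structure(posts):
    print(m.author, "raised", m.concept, "at", m.when)
```

Aggregating such triples per student and over time is what yields the map or guide the instructor consults when assessing individual participation.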
510

A new framework for a technological perspective of knowledge management

Botha, Antonie Christoffel 26 June 2008 (has links)
Rapid change is a defining characteristic of our modern society. It has a huge impact on society, governments and businesses. Businesses are forced to transform themselves fundamentally to survive in a challenging economy. Transformation implies change in the way business is conducted, in the way people perform their contribution to the organisation, and in the way the organisation perceives and manages its vital assets, which are increasingly built around the key assets of intellectual capital and knowledge. The latest management tool and realisation of how to respond to the challenges of the economy in the new millennium is the idea of "knowledge management" (KM). This study focuses on synthesising the many confusing points of view about the subject area, such as: (a) different focus points or perspectives; (b) different definitions and positioning of the subject; and (c) a bewildering number of definitions of what knowledge is and what KM entails. Popular accounts of the field blur the distinction between concepts such as knowledge versus information versus data; the difference between information management and knowledge management; the tools available to tackle the issues in this field of study and practice; and the role technology actually plays versus the hype from some journalists and the vendor community. There appears to be a lack of a coherent set of frameworks with which to abstract, comprehend and explain this subject area, let alone to build successful systems and technologies with which to apply KM. The study comprises two major parts: (1) the first part investigates the concepts, elements, drivers and challenges related to KM, and contributes a set of models for comprehending these issues and notions, considering intellectual capital, organisational learning, communities of practice and best practices; (2) the second part focuses on the technology perspective of KM. Although KM is primarily concerned with non-technical issues, this study concentrates on the technical issues and challenges, and proposes a new technology framework for KM to position and relate the different KM technologies as well as the two key applications of KM, namely knowledge portals and knowledge discovery (including text mining). It is concluded that KM and related concepts and notions need to be firmly understood as well as effectively positioned and employed to support the modern business organisation in its quest to survive and grow. The main thesis is that KM technology is a necessary but insufficient prerequisite and a key enabler for successful KM in a rapidly changing business environment. / Thesis (PhD (Computer Science))--University of Pretoria, 2010.
