241. Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine. Halpern, Yonatan, 15 December 2016.
Medical informatics plays an important role in precision medicine, delivering the right information to the right person at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and worldwide, there is now a tremendous amount of health data available for analysis.

Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.

We are faced with three main challenges.

First, the unstructured and incomplete nature of the data recorded in electronic medical records requires special attention. Relevant information can be missing or written in an obscure way that the computer does not understand.

Second, the scale of the data makes it important to develop efficient methods at every step of the machine learning pipeline, including data collection and labeling, model learning, and inference.

Third, large parts of medicine are well understood by health professionals. How do we combine the expert knowledge of specialists with the statistical insights from the electronic medical record?

Probabilistic graphical models such as Bayesian networks provide a useful abstraction for quantifying uncertainty and describing complex dependencies in data. Although significant progress has been made over the last decade on approximate inference algorithms and structure learning from complete data, learning models with incomplete data remains one of machine learning's most challenging problems. How can we model the effects of latent variables that are not directly observed?

The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after transforming the data).

Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting. For both the anchored and singly-coupled conditions, practical algorithms are presented.

The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.

The algorithms and discussion presented here were developed for the purpose of improving healthcare, but they are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables, a problem that lies at the very heart of the natural and social sciences.
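To make the anchored condition concrete, here is a minimal Python sketch (not from the thesis; the structure encoding and the variable names are hypothetical) that checks whether every latent variable in a latent-to-observed structure has an anchor, i.e., an observed child with no other latent parent.

```python
# Hypothetical sketch: the "anchored" condition requires that every latent
# variable have at least one observed child whose only latent parent is it.

def find_anchors(parents_of):
    """parents_of maps each observed variable to the set of its latent parents."""
    anchors = {}
    for child, parents in parents_of.items():
        if len(parents) == 1:                  # child belongs to exactly one latent
            latent = next(iter(parents))
            anchors.setdefault(latent, []).append(child)
    return anchors

# Toy structure: latent conditions -> observed findings (illustrative only).
structure = {
    "fever":      {"flu"},                     # anchor for "flu"
    "cough":      {"flu", "pneumonia"},        # shared child, so not an anchor
    "chest_xray": {"pneumonia"},               # anchor for "pneumonia"
}

anchors = find_anchors(structure)
latents = {p for parents in structure.values() for p in parents}
print("anchored condition holds:", set(anchors) == latents)   # True for this toy case
```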
242. An evolutionary method for training autoencoders for deep learning networks. Lander, Sean, 18 November 2016.
Introduced in 2006, Deep Learning has made large strides in both supervised and unsupervised learning. The abilities of Deep Learning have been shown to beat both generic and highly specialized classification and clustering techniques with little change to the underlying concept of a multi-layer perceptron. Though this has caused a resurgence of interest in neural networks, many of the drawbacks and pitfalls of such systems have yet to be addressed after nearly 30 years: speed of training, local minima, and manual tuning of hyper-parameters.

In this thesis we propose using an evolutionary technique to work toward solving these issues and to increase the overall quality and abilities of Deep Learning Networks. By evolving a population of autoencoders for input reconstruction, we abstract multiple features for each autoencoder in the form of hidden nodes, score the autoencoders on their ability to reconstruct their input, and finally select autoencoders for crossover and mutation with hidden nodes as the chromosome. In this way we are able not only to quickly find optimal abstracted feature sets but also to optimize the structure of the autoencoder to match the features being selected. This also allows us to experiment with different training methods with respect to data partitioning and selection, drastically reducing overall training time for large and complex datasets. The proposed method allows even large datasets to be trained quickly and efficiently with little manual parameter choice required of the user, leading to faster, more accurate creation of Deep Learning Networks.
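As a rough illustration of the evolutionary scheme described above (a minimal sketch under assumptions of my own, not the thesis implementation), the following Python/NumPy code evolves a population of tied-weight, single-hidden-layer autoencoders, scores each by reconstruction error, and applies crossover and mutation over hidden-node weight columns treated as the chromosome.

```python
# Hypothetical sketch of evolving a population of small autoencoders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # toy dataset

def reconstruction_error(W, X):
    H = np.tanh(X @ W)                             # encode with hidden nodes (columns of W)
    X_hat = H @ W.T                                # decode with tied weights
    return np.mean((X - X_hat) ** 2)

def crossover(W1, W2):
    # Take half of the hidden units (weight columns) from each parent.
    k = W1.shape[1] // 2
    return np.concatenate([W1[:, :k], W2[:, k:]], axis=1)

def mutate(W, rate=0.1):
    return W + rate * rng.normal(size=W.shape)

population = [rng.normal(scale=0.1, size=(8, 4)) for _ in range(20)]
for generation in range(30):
    ranked = sorted(population, key=lambda W: reconstruction_error(W, X))
    parents = ranked[:10]                          # keep the fitter half
    children = [mutate(crossover(parents[i], parents[(i + 1) % 10]))
                for i in range(10)]
    population = parents + children

best = min(population, key=lambda W: reconstruction_error(W, X))
print("best reconstruction MSE:", reconstruction_error(best, X))
```

Scoring each individual on a different partition of the data, rather than on the full dataset as above, would be one way to explore the data-partitioning idea the abstract mentions for cutting training time.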
243. The Stanford-Binet, Form L-M, and the Wechsler Intelligence Scale for Children: A Comparative Study Utilizing Cultural-Familial and Undifferentiated Mental Retardates. Stone, John S., 08 1900.
The purpose of this study was to compare the results obtained on the Stanford-Binet, Form L-M, and the Wechsler Intelligence Scale for Children for a group of cultural-familial and undifferentiated mental retardates. Such a study should provide some evidence as to whether the two instruments adequately measure similar abilities and whether the IQs obtained from one can be considered comparable with the IQs obtained from the other.
244. Comparison of Group and Individual Methods of Presenting Baldwin's Social Expectations Scale. Pitts, Emily C., 05 1900.
Forty Ss from introductory psychology classes participated in a study to determine whether the investigator's group-administered Social Expectations Scale (SES) was a useful research instrument and whether intelligence was a factor determining the fit of a particular cognitive model, the BSE, to the social expectations of Ss as measured by the SES.
245. A Comparison of the California Test of Mental Maturity and the Wechsler Intelligence Scale for Children in Four Clinical Groups of School Children. Nichols, Leslie A., 08 1900.
The primary problem of this study was to compare the Wechsler Intelligence Scale for Children and the California Test of Mental Maturity S-F, 1962 Revision, in order to determine whether the two instruments were interchangeable with respect to intelligence quotients for a school-clinical population.
246. Ten Years After 9/11: The Structure and Use of Intelligence Units in Local Policing. Hollier, Michael P., 12 1900.
The events of September 11, 2001 marked a paradigm shift in strategy at all levels of law enforcement in the United States. Intelligence became the watchword of the day and, with it, the movement to incorporate strategic and tactical information into daily policing. Yet while the philosophy was clear, the method and manner by which agencies were to achieve these goals were much less clearly defined. The federal government allocated funds to help agencies incorporate an intelligence function into their daily operations, but which agencies received them, and to what degree, remains unclear even today. This study seeks to determine the current state of intelligence use in municipal law enforcement agencies in the State of Texas ten years after 9/11. Using a survey, it assesses how frequently local police departments in Texas use intelligence units, identifies commonalities in their structure, and gauges their effectiveness.
247. Reverse engineering an active eye. Schmidt-Cornelius, Hanson, January 2002.
No description available.
248. An ontology model supporting multiple ontologies for knowledge sharing. Tamma, Valentina A. M., January 2001.
No description available.
249. L'intelligence organisationnelle : une nouvelle perspective pour l'amélioration de la capacité d'absorption de l'organisation / Organizational intelligence: a new perspective for improving the organization's absorptive capacity. Slama, Boulbeba, 03 December 2012.
In a changing environment, managing information and knowledge is a major challenge for an organization seeking to create value and to build and sustain a competitive advantage. Rapid changes in the environment, in technology, and in the rules of competition have aggravated the difficulties organizations face in reaching these goals. Given the important role played by information and knowledge, many companies have been forced over the past twenty years to abandon old models and adopt new approaches capable of absorbing dispersed information and knowledge. The process of absorbing internal and external information has become an essential component of performance for firms that want to adapt to change in a competitive environment. Despite the abundant literature on absorptive capacity, a methodological gap and a theoretical ambiguity about its definition and dimensionalization persist in most studies. The objective of this research is to contribute to the literature on absorptive capacity by creating and validating new perspectives and measures, grounded in a thorough analysis of the literature on competitive intelligence and knowledge management. Starting from a conceptual model, we test the psychometric properties of our constructs on data from 54 French companies. The results of the study confirm the validity of the proposed scales and support their consolidation into a usable instrument for measuring absorptive capacity.
250. Serious Games pour la e-Santé : application à la formation des médecins généralistes / Serious games for e-health: application to the training of general practitioners. Guo, Jing, 16 September 2016.
Serious games are video games designed with a primary objective other than entertainment. They are increasingly used in healthcare as educational tools for medical training or to support patient recovery. In this thesis, we focus on the design of a serious game for training general practitioners, with particular attention to teaching the communication and interpersonal skills that play a very important role in the medical profession yet receive little coverage in training curricula. We are especially interested in design methodologies for such a game, which must deliver useful content while balancing learning and entertainment. To carry out this work, the first part of the thesis presents an analysis of existing serious-game design methods, studying in particular the mechanisms that motivate the player as well as the main design patterns. We explain why serious games require a specific architecture whose main characteristic is a clear separation between the concepts needed for learning and those related to gameplay. We then propose a model of the medical consultation which, in addition to capturing the business process it corresponds to, represents the elements needed for the algorithmic implementation of a dialogue engine between a player and a virtual patient. This model uses ontologies to describe the knowledge involved, and we show how a medical consultation scenario can be described in terms of instances of these ontologies. The ontologies comprise four levels describing the patient profile, the consultation result, the scenario, and the phrase. This description is accessible to expert trainers, who thus have a tool for defining the pedagogical objectives that the player-learner must reach during the simulation. These analyses are finally applied to the case of the medical consultation, and we describe the architecture of a game we designed called AgileDoctor, whose goal is to let a learner play the role of a physician conducting medical consultations with patients of diverse profiles.
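As an illustration of how a consultation scenario might be expressed as instances of the four ontology levels named in the abstract (patient profile, consultation result, scenario, phrase), here is a small hypothetical Python sketch; the class and field names are assumptions, not taken from the AgileDoctor implementation.

```python
# Hypothetical sketch of the four description levels mentioned in the abstract.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PatientProfile:
    age: int
    history: List[str]

@dataclass
class Phrase:
    speaker: str                  # "doctor" or "patient"
    text: str

@dataclass
class ConsultationResult:
    diagnosis: str
    advice: str

@dataclass
class Scenario:
    profile: PatientProfile
    dialogue: List[Phrase] = field(default_factory=list)
    expected_result: Optional[ConsultationResult] = None

# A toy scenario an expert trainer might author to set pedagogical objectives.
scenario = Scenario(
    profile=PatientProfile(age=54, history=["hypertension"]),
    dialogue=[Phrase("patient", "I have had chest pain since this morning.")],
    expected_result=ConsultationResult("suspected angina", "refer to cardiology"),
)
print(scenario.expected_result.diagnosis)
```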