241

An evolutionary method for training autoencoders for deep learning networks

Lander, Sean 18 November 2016
Introduced in 2006, Deep Learning has made large strides in both supervised and unsupervised learning. The abilities of Deep Learning have been shown to beat both generic and highly specialized classification and clustering techniques with little change to the underlying concept of a multi-layer perceptron. Although this has caused a resurgence of interest in neural networks, many of the drawbacks and pitfalls of such systems have yet to be addressed after nearly 30 years: slow training, local minima, and manual tuning of hyper-parameters.

In this thesis we propose an evolutionary technique to work toward solving these issues and to increase the overall quality and abilities of Deep Learning Networks. By evolving a population of autoencoders for input reconstruction, we abstract multiple features for each autoencoder in the form of hidden nodes, score the autoencoders on their ability to reconstruct their input, and select autoencoders for crossover and mutation with the hidden nodes as the chromosome. In this way we not only quickly find optimal abstracted feature sets but also optimize the structure of the autoencoder to match the features being selected. This also allows us to experiment with different training methods with respect to data partitioning and selection, drastically reducing overall training time for large and complex datasets. The proposed method allows even large datasets to be trained quickly and efficiently with little manual parameter tuning by the user, leading to the faster, more accurate creation of Deep Learning Networks.
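As a rough sketch of the loop described above (score by reconstruction, select, then cross over and mutate with hidden nodes as the chromosome), consider the minimal NumPy example below. The single-hidden-layer tied-weight autoencoder, the truncation selection, and every hyper-parameter are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_ae(n_in, n_hidden):
    # An autoencoder is represented as a list of hidden nodes,
    # each node being its incoming weight vector (the chromosome's genes).
    return [rng.normal(0, 0.1, n_in) for _ in range(n_hidden)]

def reconstruct(ae, X):
    W = np.array(ae)            # (n_hidden, n_in)
    H = np.tanh(X @ W.T)        # encode
    return H @ W                # decode with tied weights

def fitness(ae, X):
    # Score by reconstruction: lower mean squared error is better.
    return -np.mean((reconstruct(ae, X) - X) ** 2)

def crossover(a, b):
    # The child inherits a random subset of hidden nodes from each parent,
    # so the hidden-layer size itself is free to evolve.
    child = [n for n in a if rng.random() < 0.5] + \
            [n for n in b if rng.random() < 0.5]
    return child if child else [a[0]]   # never produce an empty network

def mutate(ae, rate=0.1):
    return [n + rng.normal(0, 0.01, n.shape) if rng.random() < rate else n
            for n in ae]

def evolve(X, pop_size=20, generations=30):
    pop = [init_ae(X.shape[1], int(rng.integers(2, 10)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ae: fitness(ae, X), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            i, j = rng.choice(len(parents), size=2, replace=False)
            children.append(mutate(crossover(parents[i], parents[j])))
        pop = parents + children
    return max(pop, key=lambda ae: fitness(ae, X))

X = rng.normal(size=(100, 8))                   # toy stand-in data
best = evolve(X)
print(len(best), "hidden nodes, fitness", round(fitness(best, X), 4))
```

Because crossover copies whole hidden nodes rather than individual weights, the hidden-layer width varies across the population, which is how a scheme like this can optimize the autoencoder's structure alongside its features.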
242

The Stanford-Binet, Form L-M, and the Wechsler Intelligence Scale for Children: A Comparative Study Utilizing Cultural-Familial and Undifferentiated Mental Retardates

Stone, John S. 08 1900
The purpose of this study was to compare the results obtained on the Stanford-Binet, Form L-M, and the Wechsler Intelligence Scale for Children for a group of cultural-familial and undifferentiated mental retardates. Such a study should provide some evidence as to whether the two instruments adequately measure similar abilities and whether the IQs obtained from one can be considered comparable with the IQs obtained from the other.
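As a sketch of how such comparability might be assessed on paired scores, the Python example below correlates two sets of simulated IQs and tests their mean difference. The data are fabricated for illustration and do not reflect the study's sample or its actual analysis.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(0)
true_ability = rng.normal(60, 8, size=40)          # latent ability (simulated)
binet = true_ability + rng.normal(0, 3, size=40)   # Stanford-Binet IQs
wisc  = true_ability + rng.normal(0, 3, size=40)   # WISC IQs

r, _ = pearsonr(binet, wisc)     # do the instruments rank children similarly?
t, p = ttest_rel(binet, wisc)    # do they yield the same mean IQ?
print(f"correlation r={r:.2f}; paired t={t:.2f}, p={p:.3f}")
```

A high correlation with a non-significant mean difference would support treating the two instruments' IQs as comparable; a high correlation with a systematic offset would not.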
243

Comparison of Group and Individual Methods of Presenting Baldwin's Social Expectations Scale

Pitts, Emily C. 05 1900
Forty subjects (Ss) from introductory psychology classes participated in a study to determine whether the investigator's group Social Expectations Scale (SES) was a useful research instrument, and whether intelligence was a factor determining the fit of a particular cognitive model, Baldwin's Social Expectations (BSE), to the social expectations of Ss as measured by the SES.
244

A Comparison of the California Test of Mental Maturity and the Wechsler Intelligence Scale for Children in Four Clinical Groups of School Children

Nichols, Leslie A. 08 1900
The primary problem of this study was to compare the Wechsler Intelligence Scale for Children and the California Test of Mental Maturity S-F, 1962 Revision, in order to determine whether the two instruments were interchangeable with respect to intelligence quotients for a school-clinical population.
245

Ten Years After 9/11: The Structure and Use of Intelligence Units in Local Policing

Hollier, Michael P. 12 1900
The events of September 11, 2001 marked a paradigm shift in strategy at all levels of law enforcement in the United States. Intelligence became the watchword of the day and, with it, came the movement to incorporate strategic and tactical information into daily policing. Yet while the philosophy was clear, the method and manner by which agencies were to achieve these goals were far less defined. The federal government allocated funds to help agencies incorporate an intelligence function into their daily operations, but which agencies did so, and to what degree, remains unclear even today. This study seeks to determine the state of intelligence use in municipal law enforcement agencies in the State of Texas ten years after 9/11. Through a survey, it assesses the frequency of use of intelligence units in local police departments in the State of Texas, identifies commonalities in their structure, and determines the state of their effectiveness.
246

Reverse engineering an active eye

Schmidt-Cornelius, Hanson January 2002
No description available.
247

An ontology model supporting multiple ontologies for knowledge sharing

Tamma, Valentina A. M. January 2001
No description available.
248

Organizational Intelligence: A New Perspective for Improving the Organization's Absorptive Capacity

Slama, Boulbeba 03 December 2012
In a rapidly changing environment, managing information and knowledge is a major challenge for organizations seeking to create value and to build and sustain a competitive advantage. Rapid changes in the environment, in technology, and in the rules of competition have aggravated the problems organizations face in reaching these goals. Given the central role of information and knowledge, many companies have been forced over the past twenty years to abandon old models and adopt new approaches capable of absorbing dispersed information and knowledge. The process of absorbing internal and external information has become an essential component of performance for firms that want to adapt to changes in a competitive environment. Despite the abundant literature on absorptive capacity, most studies suffer from a methodological gap and a theoretical ambiguity in specifying its definition and dimensions. The objective of this research is to contribute to the literature on absorptive capacity by creating and validating new perspectives and measures, grounded in a thorough analysis of the literature on business intelligence and knowledge management. Starting from a conceptual model, we test the psychometric properties of our constructs on data from 54 French companies. The results of the study confirm the validity of the proposed scales and support their consolidation into a usable instrument for measuring absorptive capacity.
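As a hedged illustration of the kind of psychometric check this implies, the sketch below computes Cronbach's alpha, a standard internal-consistency statistic for multi-item scales, on simulated responses. The data and scale are invented; the thesis's actual constructs and validation procedure are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, k) matrix of responses to a k-item scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(54, 1))                  # 54 firms, one construct
items = latent + rng.normal(0, 0.5, size=(54, 5))  # 5 correlated scale items
print(f"alpha = {cronbach_alpha(items):.2f}")      # high alpha: consistent scale
```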
249

Serious Games for e-Health: Application to the Training of General Practitioners

Guo, Jing 16 September 2016
Serious games are video games designed with a primary objective other than entertainment. They are increasingly used in healthcare, both as educational tools for medical training and to support patient recovery. This thesis addresses the design of a serious game for training general practitioners, focusing in particular on learning the communication and interpersonal skills that play a central role in the medical profession yet receive little attention in training curricula. We are especially interested in design methodologies for a game that must deliver useful content while balancing learning and entertainment. The first part of the thesis analyzes existing serious-game design methods, examining the mechanisms that motivate players as well as the main design patterns. We explain why serious games require a particular architecture whose key characteristic is a clear separation between the concepts needed for learning and those related to gameplay. We then propose a model of the medical consultation that, in addition to capturing the underlying business process, represents the elements needed for the algorithmic implementation of a dialogue engine between a player and a virtual patient. This model uses ontologies to describe the knowledge involved, and we show how a medical consultation scenario can be described in terms of instances of these ontologies. The ontologies comprise four levels describing the patient profile, the consultation result, the scenario, and the phrase. This description is accessible to expert trainers, giving them a tool for defining the pedagogical objectives that the learner must reach during the simulation. Finally, these analyses are applied to the medical consultation, and we describe the architecture of a game we designed, called AgileDoctor, whose goal is to let a learner play the role of a doctor conducting medical consultations with patients of varied profiles.
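The four-level description above (patient profile, consultation result, scenario, phrase) lends itself to a small data-model sketch. The Python below is a hypothetical illustration of how one scenario instance might be structured; the class and field names are assumptions, and the thesis itself uses formal ontologies rather than Python classes.

```python
from dataclasses import dataclass, field

@dataclass
class PatientProfile:          # level 1: who the virtual patient is
    age: int
    history: list[str]

@dataclass
class ConsultationResult:      # level 2: what the learner should achieve
    expected_diagnosis: str
    communication_goals: list[str]

@dataclass
class Phrase:                  # level 4: one utterance in the dialogue
    speaker: str               # "doctor" or "patient"
    text: str
    intent: str                # e.g. "greeting", "symptom_report"

@dataclass
class Scenario:                # level 3: the scripted consultation
    profile: PatientProfile
    target: ConsultationResult
    phrases: list[Phrase] = field(default_factory=list)

# One toy scenario a trainer might author:
scenario = Scenario(
    profile=PatientProfile(age=54, history=["hypertension"]),
    target=ConsultationResult("angina", ["open questions", "empathy"]),
    phrases=[Phrase("patient", "I've had chest pain since Monday.",
                    "symptom_report")],
)
print(scenario.target.expected_diagnosis)
```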
250

Automated Feature Engineering for Deep Neural Networks with Genetic Programming

Heaton, Jeff 19 April 2017
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms can benefit from feature engineering. Engineered features are usually created by expressions that combine one or more of the original features, and the choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that different model families benefit from different types of engineered features: random forests, gradient-boosting machines, and other tree-based models might not see the same accuracy gain from an engineered feature that neural networks, generalized linear models, and other dot-product-based models achieve on the same data set.

This dissertation presents a genetic-programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks on some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. The algorithm faces a potential search space composed of all possible mathematical combinations of the original feature vector, and five experiments were designed to guide the search process toward efficiently evolving good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which the algorithm improved the accuracy of neural networks on data sets augmented by its engineered features.
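To make the search concrete, here is a minimal, hypothetical genetic-programming sketch in Python. For brevity it uses mutation-only evolution and scores candidate features by their absolute correlation with the target, whereas the dissertation's algorithm evaluates each candidate with a neural network; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
OPS = {"+": np.add, "-": np.subtract, "*": np.multiply,
       "/": lambda a, b: a / (np.abs(b) + 1e-6)}    # protected division

def random_tree(n_features, depth=2):
    # An expression tree is either a feature index (leaf)
    # or a tuple (operator, left subtree, right subtree).
    if depth == 0 or rng.random() < 0.3:
        return int(rng.integers(n_features))
    return (str(rng.choice(list(OPS))),
            random_tree(n_features, depth - 1),
            random_tree(n_features, depth - 1))

def evaluate(tree, X):
    if isinstance(tree, int):
        return X[:, tree]
    op, left, right = tree
    return OPS[op](evaluate(left, X), evaluate(right, X))

def mutate(tree, n_features):
    # Replace the whole tree or one branch with a fresh random subtree.
    if isinstance(tree, int) or rng.random() < 0.3:
        return random_tree(n_features)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(left, n_features), right)
    return (op, left, mutate(right, n_features))

def fitness(tree, X, y):
    f = evaluate(tree, X)
    if np.std(f) < 1e-9:                 # constant feature carries no signal
        return 0.0
    return abs(np.corrcoef(f, y)[0, 1])

def engineer(X, y, pop=50, gens=25):
    trees = [random_tree(X.shape[1]) for _ in range(pop)]
    for _ in range(gens):
        trees.sort(key=lambda t: fitness(t, X, y), reverse=True)
        survivors = trees[: pop // 2]    # keep the best, mutate them to refill
        trees = survivors + [mutate(t, X.shape[1]) for t in survivors]
    return trees[0]

# Toy data where the ratio x0 / x1 is the feature worth engineering:
X = rng.normal(size=(200, 4))
y = X[:, 0] / (np.abs(X[:, 1]) + 1e-6)
best = engineer(X, y)
print("best tree:", best, " fitness:", round(fitness(best, X, y), 3))
```

Swapping the correlation proxy for an actual model-fit score (for example, the validation error of a small neural network trained with the candidate feature appended) would move this sketch closer to the evaluation strategy the dissertation describes, at a much higher computational cost per candidate.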
