251

Automated Feature Engineering for Deep Neural Networks with Genetic Programming

Heaton, Jeff 19 April 2017 (has links)
<p> Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. These engineered features are usually created from expressions that combine one or more of the original features. The most effective structure for an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that different model families benefit from different types of engineered features: random forests, gradient-boosting machines, and other tree-based models might not see the same accuracy gain that an engineered feature allows neural networks, generalized linear models, and other dot-product based models to achieve on the same data set. </p><p> This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. The algorithm presented here faces a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process toward efficiently evolving good engineered features. The result is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This efficiency gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy.
Finally, a sixth experiment empirically demonstrated the degree to which the algorithm improved the accuracy of neural networks on data sets augmented by its engineered features. </p>
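As a rough illustration of the search described above, the following sketch runs a tiny genetic programming loop over expression trees built from the original features. Everything here is an assumption for illustration: the operator set, tree depth, and population parameters are invented, and the fitness function uses a simple correlation proxy in place of the neural-network evaluation the dissertation actually performs.

```python
import random
import math

# A candidate engineered feature is a small expression tree over the
# original features. Operators and fitness are illustrative stand-ins.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b if abs(b) > 1e-9 else 0.0,  # protected division
}

def random_tree(n_features, depth=2):
    """Build a random expression tree over feature indices."""
    if depth == 0 or random.random() < 0.3:
        return ("feat", random.randrange(n_features))
    op = random.choice(list(OPS))
    return (op, random_tree(n_features, depth - 1),
                random_tree(n_features, depth - 1))

def evaluate(tree, row):
    """Compute the engineered feature value for one data row."""
    if tree[0] == "feat":
        return row[tree[1]]
    return OPS[tree[0]](evaluate(tree[1], row), evaluate(tree[2], row))

def fitness(tree, X, y):
    """Proxy fitness: absolute correlation between the engineered
    feature and the target (cheaper than training a network)."""
    vals = [evaluate(tree, row) for row in X]
    n = len(vals)
    mv, my = sum(vals) / n, sum(y) / n
    cov = sum((v - mv) * (t - my) for v, t in zip(vals, y))
    sv = math.sqrt(sum((v - mv) ** 2 for v in vals))
    sy = math.sqrt(sum((t - my) ** 2 for t in y))
    return abs(cov / (sv * sy)) if sv > 0 and sy > 0 else 0.0

def engineer_feature(X, y, pop_size=30, generations=15):
    """Evolve one engineered feature by truncation selection."""
    n_features = len(X[0])
    pop = [random_tree(n_features) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, X, y), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [random_tree(n_features)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda t: fitness(t, X, y))
```

On a synthetic target such as y = x0 * x1, this loop tends to surface multiplication-based trees; swapping the correlation proxy for a network-based score would bring it closer in spirit to the dissertation's approach, at much higher cost per candidate.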
252

Creating emotionally aware performance environments : a phenomenological exploration of inferred and invisible data space

Povall, Richard Mark January 2003 (has links)
The practical research undertaken for this thesis - the building of interactive and non-interactive environments for performance - posits a radical recasting of the performing body in physical and digital space. The choreographic and thematic context of the performance work has forced us, as makers, to ask questions about the nature of digital interactivity, which in turn feeds the work theoretically, technically and thematically. A computer views (and attempts to interpret) motion information through a video camera and, by way of a scripting language, converts that information into MIDI data. As the research has developed, our company has been able to design environments which respond sensitively to particular artistic / performance demands. I propose to show in this research that it is possible to design an interactive system that is part of a phenomenological performance space: a mechanical system with an ontological heart. This represents a significant shift in thinking from existing systems; it is at the heart of the research developments and is what I consider to be one of the primary outcomes of this research, outcomes that are original and contribute to the body of knowledge in this area. The phenomenal system allows me to use technology in a poetic way, where the poetic aesthetic is dominant: it responds to the phenomenal dancer, rather than merely to the 'physico-chemical' (Merleau-Ponty 1964, pp. 10-11) dancer. Other artists whose work attempts phenomenological approaches to working with technology and the human body are referenced throughout the writing.
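The camera-to-MIDI pipeline mentioned above can be sketched, in the most minimal terms, as frame differencing followed by a mapping onto a controller value. This is purely illustrative: the thesis's actual system, scripting language, and mapping are not described in the abstract, so the frame representation and the 0-127 scaling below are assumptions.

```python
# Hypothetical sketch: estimate "amount of motion" between two video
# frames and map it onto a MIDI control-change value (0-127).

def motion_amount(prev_frame, frame):
    """Mean absolute pixel difference between two grayscale frames,
    each given as a list of rows of 0-255 intensity values."""
    total, count = 0, 0
    for row_a, row_b in zip(prev_frame, frame):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def to_midi_cc(motion, max_motion=255.0):
    """Map a motion amount onto the 0-127 range of a MIDI
    controller value, clamping at the extremes."""
    value = int(round(127 * min(motion, max_motion) / max_motion))
    return max(0, min(127, value))
```

A real system would read frames from a camera and stream the resulting controller values to a synthesiser or media engine; the point of the sketch is only that the motion-to-MIDI conversion itself can be very small.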
253

Emotional intelligence as determinant of the ideal characteristics to deliver the best service to customers

13 August 2012 (has links)
M.B.A. / Applications of emotional intelligence in the workplace are almost infinite. Emotional intelligence is instrumental in resolving a sticky problem with a co-worker, closing a deal with a difficult customer, criticising your boss, staying on top of a task until it is completed, and in many other challenges affecting your success. Emotional intelligence is used both intrapersonally (helping yourself) and interpersonally (helping others) (Weisinger, 1998:xvi). One of the most difficult and rewarding practices of emotional intelligence is to help others help themselves (Weisinger, 1998:181). A work organisation is an integrated system that depends upon the interrelationship of the individuals who are part of it. How each person performs affects the company as a whole. That is why it is important to the success of the company not only that all employees perform to the best of their abilities, but that they also help others do the same (Weisinger, 1998:183). Job satisfaction is a general attitude toward one's job: the difference between the amount of rewards workers receive and the amount they believe they should receive. A person's job is more than just the obvious activities; it requires interaction with co-workers and bosses, following organisational rules and policies, meeting performance standards, and living with working conditions that are often less than ideal. Job satisfaction is therefore not straightforward (Robbins, 1996:190). Service variability refers to the unwanted or random levels of service quality customers receive when they patronise a service. Variability is primarily caused by the human element, although machines may also malfunction and cause variation in the service. Different service employees will perform the same service differently, and even the same employee will provide varying levels of service from one time to another.
Unfortunately, because of the variability characteristic of services, standardisation and quality control are more difficult (Kurtz & Clow, 1998:14). Quality at the source refers to the philosophy of making each worker responsible for the quality of his or her own work, incorporating the notion of "do it right". Workers are expected to provide goods or services that meet specifications and to find and correct mistakes that occur. Each worker becomes a quality inspector for his or her own work (Stevenson, 1996:103). This dissertation therefore examines the viewpoints of different experts on emotional intelligence and identifies the characteristics important for rendering quality client service.
254

Various considerations on performance measures for a classification of ordinal data

Nyongesa, Denis Barasa 13 August 2016 (has links)
<p> Technological advancement and the escalating interest in personalized medicine have resulted in an increasing number of ordinal classification problems. The most commonly used performance metrics for evaluating the effectiveness of a multi-class ordinal classifier include predictive accuracy, Kendall's tau-b rank correlation, and the average mean absolute error (AMAE). These metrics are useful for classifying multi-class ordinal data, but no single one of them incorporates the misclassification cost. Recently, distance, which finds the optimal trade-off between predictive accuracy and misclassification cost, was proposed as a cost-sensitive performance metric for ordinal data. This thesis proposes criteria for variable selection and methods that account for minimum distance and improved accuracy, thereby providing a platform for a more comprehensive comparative analysis of multiple ordinal classifiers. The strengths of the methodology are demonstrated through analysis of a real colon cancer data set.</p>
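For integer-coded ordinal labels, the metrics named above can be sketched as follows. AMAE and average misclassification cost follow their standard definitions; the cost matrix used in the test and the idea of reporting error and cost side by side are illustrative assumptions, not the thesis's exact distance formula, and Kendall's tau-b is omitted for brevity.

```python
# Hedged sketch of ordinal-classification evaluation metrics.
# Labels are assumed to be integers 0..n_classes-1.

def accuracy(y_true, y_pred):
    """Fraction of exactly correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def amae(y_true, y_pred, n_classes):
    """Average mean absolute error: MAE is computed within each true
    class and then averaged, so minority classes count equally."""
    per_class = []
    for c in range(n_classes):
        errs = [abs(t - p) for t, p in zip(y_true, y_pred) if t == c]
        if errs:
            per_class.append(sum(errs) / len(errs))
    return sum(per_class) / len(per_class)

def avg_cost(y_true, y_pred, cost):
    """Average misclassification cost under a user-supplied cost
    matrix indexed as cost[true_label][predicted_label]."""
    return sum(cost[t][p] for t, p in zip(y_true, y_pred)) / len(y_true)
```

A cost-sensitive comparison of classifiers would then examine, for each model, both its error-based scores (accuracy, AMAE) and its average cost, rather than ranking on accuracy alone.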
255

Market Intelligence : A literature review

Bohlin, Sofia, Inha, Eini January 2017 (has links)
The aim of this paper is to provide insights into market intelligence and to answer the question "What is market intelligence?" by reviewing the existing literature on the topic. The study also investigates the connection between market intelligence and game theory, which the authors believe forms the foundation for market intelligence studies. The search for relevant material was conducted using the databases of Halmstad University and Google Scholar. Due to the lack of literature treating market intelligence as an overall theory, other sources, such as books, were used in addition to journal articles. Based on the reviewed literature, the study recognizes six theoretical connections to game theory, and a general definition of market intelligence emerged as a result of the review.
256

An analysis of learning in weightless neural systems

Bradshaw, Nicholas P. January 1997 (has links)
This thesis brings together two strands of neural networks research, weightless systems and statistical learning theory, in an attempt to better understand the learning and generalisation abilities of a class of pattern-classifying machines. The machines under consideration are n-tuple classifiers. While their analysis falls outside the domain of more widespread neural network methods, the n-tuple method has found considerable application since its first publication in 1959. The larger class of learning systems to which the n-tuple classifier belongs is known as the set of weightless or RAM-based systems, because they store all their modifiable information in the nodes rather than as weights on the connections. The analytical tools used are those of statistical learning theory. Learning methods and machines are considered in terms of a formal learning problem which allows the precise definition of terms such as learning and generalisation (in this context). Results are derived relating the generalisation error to the empirical error of the machine on the training set, the number of training examples, and the complexity of the machine (as measured by the Vapnik-Chervonenkis dimension). In the thesis this theoretical framework is applied for the first time to weightless systems in general and to n-tuple classifiers in particular. Novel theoretical results are used to inspire the design of related learning machines, and empirical tests are used to assess the power of these new machines. Data-independent theoretical results are also compared with data-dependent results to explain apparent anomalies in the n-tuple classifier's behaviour. The thesis takes an original approach to the study of weightless networks, one which gives new insights into their strengths as learning machines. It also allows a new family of learning machines to be introduced and a method for improving generalisation to be applied.
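To make the weightless idea concrete, the following is a minimal n-tuple (RAM-based) classifier over binary inputs, in the spirit of the 1959 scheme the abstract mentions. The tuple sampling and scoring details below are one common variant chosen for illustration, not necessarily the exact machines analysed in the thesis.

```python
import random

class NTupleClassifier:
    """Minimal n-tuple classifier: all modifiable information lives
    in per-class RAM contents (sets of seen addresses), not in
    connection weights."""

    def __init__(self, input_bits, n=3, n_tuples=20, seed=0):
        rng = random.Random(seed)
        # Each tuple is a fixed random selection of n bit positions.
        self.tuples = [rng.sample(range(input_bits), n)
                       for _ in range(n_tuples)]
        # One set ("RAM") of seen addresses per (class, tuple) pair.
        self.rams = {}

    def _address(self, x, positions):
        # The bits at the tuple's positions form the RAM address.
        return tuple(x[i] for i in positions)

    def train(self, x, label):
        """One-shot learning: mark each addressed RAM location."""
        for t_idx, pos in enumerate(self.tuples):
            self.rams.setdefault((label, t_idx), set()).add(
                self._address(x, pos))

    def classify(self, x):
        """Score each class by how many tuples recognise x; return
        the class with the most matches."""
        labels = {lbl for lbl, _ in self.rams}
        scores = {
            lbl: sum(self._address(x, pos) in
                     self.rams.get((lbl, t_idx), set())
                     for t_idx, pos in enumerate(self.tuples))
            for lbl in labels
        }
        return max(scores, key=scores.get)
```

Training is a single pass of set insertions with no gradient computation, which is what makes the statistical-learning analysis of these machines (their VC dimension and generalisation bounds) interesting in its own right.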
257

Cultural mediation and cognitive development in two Jewish communities

Redhill, Karen Jennifer January 2015 (has links)
No description available.
258

Rule extraction using destructive learning in artificial neural networks

Unknown Date (has links)
The use of inductive learning to extract general rules from examples is a promising way to overcome the knowledge acquisition bottleneck. Over the last decade, many such techniques have been proposed, but none has proved to be an efficient, general rule extractor for complex real-world applications. Recent research has indicated that hybrid-learning techniques which integrate two or more learning strategies outperform single learning techniques. In designing such a hybrid-learning method, neural network learning can be expected to be a good partner because it is tolerant of noisy data and flexible with approximate data. / This dissertation proposes another such method--a rule extraction method using an artificial neural network (ANN) that is trained by destructive learning. Unlike other published methods, the method proposed here takes advantage of the smart (pruned) network, which contains more exact knowledge of the problem domain (environment). The method consists of three phases: training, pruning, and rule extraction. The training phase is concerned with ANN learning, using a general backpropagation (BP) learning algorithm. In the pruning phase, redundant hidden units and links are deleted from the trained network, and the remaining link weights are retrained to obtain near-saturated outputs from the hidden units. The rule extraction algorithm then uses the pruned network to extract rules. / The proposed method is evaluated empirically on three application domains--the MONK's problems, the IRIS classification data set, and the thyroid-disease diagnosis data set--and its performance is compared with that of other classification and/or machine learning methods. For discrete samples the proposed method outperforms the others, while for continuous samples it beats most of the methods with which it is compared.
The classifying accuracy of the proposed method is higher than that of either backpropagation learning alone or the pruned network on which it is based. / Source: Dissertation Abstracts International, Volume: 55-04, Section: B, page: 1526. / Major Professor: R. C. Lacher. / Thesis (Ph.D.)--The Florida State University, 1994.
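The pruning phase described above can be sketched in isolation. The saliency measure used here (the absolute outgoing weight of each hidden unit) is a common heuristic assumed for illustration, not necessarily the dissertation's exact criterion, and the BP training and rule-extraction phases are omitted.

```python
# Hedged sketch of destructive learning's pruning step: delete hidden
# units whose contribution to the output is negligible, keeping the
# rest for retraining and rule extraction.

def prune_hidden_units(w_in, w_out, threshold=0.1):
    """w_in: input->hidden weights, one row of input weights per
    hidden unit. w_out: hidden->output weight per hidden unit
    (single-output network assumed). Returns the pruned pair."""
    kept_in, kept_out = [], []
    for row, out_w in zip(w_in, w_out):
        saliency = abs(out_w)  # assumed saliency: |outgoing weight|
        if saliency >= threshold:
            kept_in.append(row)
            kept_out.append(out_w)
    return kept_in, kept_out
```

In the full method, the surviving weights would then be retrained until the hidden units saturate, which makes their outputs nearly binary and hence amenable to extraction as discrete rules.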
259

Modified election methodology: A methodology for describing human beliefs

Unknown Date (has links)
This dissertation presents Modified Election (or ME) methodology and shows how it may be used to describe the beliefs a human expert would form regarding the answer to a given question, based on the available evidence. For example, the methodology could be used to describe the beliefs a heart specialist would form regarding whether a patient should be put on a low-fat, low-cholesterol diet, based on whether the patient is overweight, has a family history of heart problems, and so on. ME methodology employs statistical methods used to interpret random samples, as well as the concept of a "Modified Election" which is developed in this dissertation. In ME methodology, the numbers of "votes" for the possible outcomes in a modified election are used to weight the different pieces of evidence which might affect an expert's beliefs. / Two other popular formalisms for describing beliefs are Bayesian theory and Dempster-Shafer theory. Certain problematic aspects of these two formalisms, which motivated ME methodology, are discussed, and it is shown how ME methodology overcomes these problems. ME methodology may be used as the basis for the design of expert systems, and an expert system is presented which illustrates how to do this. / Source: Dissertation Abstracts International, Volume: 54-04, Section: B, page: 2068. / Major Professor: Daniel G. Schwartz. / Thesis (Ph.D.)--The Florida State University, 1993.
260

A cognitive hinting structure for deep domain knowledge

Unknown Date (has links)
A framework is presented for the acquisition of domain-specific knowledge from experts, referred to as the ENVIRONMENTAL HINTING (ENVHINT) framework. ENVHINT attempts to steer the expert's focus toward the derivation of expert knowledge by embedding its acquisition in the dynamics of the environment which shaped the expertise. Within this framework, the research focuses on the development of cognitive structures which can be used to develop probing domain-specific questions. / Cognitive structures are developed from urban residents' repertory grids, which are based on personal construct theory. A cognitive structure reveals dependencies in the form of construct equivalence classes and implications from one equivalence class to another. / Weights are assigned to the implication lines of a cognitive structure; they are obtained from the fuzzy grid from which the cognitive structure is derived. The weights allow paths to be accessed according to the relevance of urban concerns, and the relevance strengths of paths are used to derive domain-specific hinting questions for experts. / Source: Dissertation Abstracts International, Volume: 54-04, Section: B, page: 2071. / Major Professor: Wyllis Bandler. / Thesis (Ph.D.)--The Florida State University, 1993.
