  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Understanding and exploiting user intent in community question answering

Chen, Long January 2014 (has links)
A number of Community Question Answering (CQA) services have emerged and proliferated in the last decade. Typical examples include Yahoo! Answers, WikiAnswers, and also domain-specific forums like StackOverflow. These services help users obtain information from a community - a user can post his or her questions which may then be answered by other users. Such a paradigm of information seeking is particularly appealing when the question cannot be answered directly by Web search engines due to the unavailability of relevant online content. However, questions submitted to a CQA service are often colloquial and ambiguous. An accurate understanding of the intent behind a question is important for satisfying the user's information need more effectively and efficiently. In this thesis, we analyse the intent of each question in CQA by classifying it into five dimensions, namely: subjectivity, locality, navigationality, procedurality, and causality. By making use of advanced machine learning techniques, such as Co-Training and PU-Learning, we are able to attain consistent and significant classification improvements over the state-of-the-art in this area. In addition to the textual features, a variety of metadata features (such as the category to which the question was posted) are used to model a user's intent, which in turn helps the CQA service to perform better in finding similar questions, identifying relevant answers, and recommending the most relevant answerers. We validate the usefulness of user intent in two different CQA tasks. Our first application is question retrieval, where we present a hybrid approach which blends several language modelling techniques, namely, the classic (query-likelihood) language model, the state-of-the-art translation-based language model, and our proposed intent-based language model.
Our second application is answer validation, where we present a two-stage model which first ranks similar questions by using our proposed hybrid approach, and then validates whether the answer of the top candidate can serve as an answer to a new question by leveraging sentiment analysis, query quality assessment, and search list validation.
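The interpolation at the heart of such a hybrid retrieval model can be sketched as follows; the toy question collection, the blend weight alpha, and the placeholder intent score are illustrative assumptions, not the thesis's actual configuration:

```python
import math
from collections import Counter

def ql_score(query, doc, collection, mu=10.0):
    """Classic query-likelihood language model with Dirichlet smoothing: log P(q|d)."""
    doc_tf, doc_len = Counter(doc), len(doc)
    coll_tf, coll_len = Counter(collection), len(collection)
    score = 0.0
    for w in query:
        p_coll = coll_tf[w] / coll_len  # collection (background) probability
        score += math.log((doc_tf[w] + mu * p_coll) / (doc_len + mu))
    return score

def hybrid_score(query, doc, collection, intent_score, alpha=0.7):
    """Linear blend of the classic model with an (assumed) intent-based score."""
    return alpha * ql_score(query, doc, collection) + (1 - alpha) * intent_score

# Two toy archived questions, tokenised.
docs = [["install", "python", "windows"], ["best", "pizza", "london"]]
collection = [w for d in docs for w in d]
query = ["install", "python"]
# Placeholder intent scores (e.g. both questions judged equally procedural).
scores = [hybrid_score(query, d, collection, intent_score=0.0) for d in docs]
print(scores.index(max(scores)))  # the matching question ranks first
```

In the thesis the blend also includes a translation-based model; here a single placeholder intent score stands in for the non-classic components.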
12

Using machine learning for decoy discrimination in protein tertiary structure prediction

Tan, C. W. January 2006 (has links)
In this thesis, the novelty of using machine learning to identify the low-RMSD structures in decoy discrimination in protein tertiary structure prediction is investigated. More specifically, neural networks are used to learn to recognize low-RMSD structures, using native protein structures as positive training examples, and simulated decoy structures as negative training examples. Simulated decoy structures are derived by reversing the sequences of native structures in the set of positive training examples, and threading the reversed sequences back to the native structures. Various input features, extracted from these native and simulated decoy structures, are used as inputs to the neural networks. These input features are the identities of residue pairs, the separation between the residues along the sequence, the pairwise distance and the relative solvent accessibilities of the residues. Various neural networks are created depending on the amount of input features used. The neural networks are tested against the in-house pairwise potentials of mean force method, as well as against a K-Nearest Neighbours algorithm. The second novel idea of this thesis is to use evolutionary information in the decoy discrimination process. Evolutionary information, in the form of PSI-BLAST profiles, is used as inputs to the neural networks. Results have shown that the best performing neural network is the one that uses input information comprising PSI-BLAST profiles of residue pairs, pairwise distance and the relative solvent accessibilities of the residues. This neural network is the best among all methods tested, including the pairwise potentials method, in discriminating the native structures. Therefore this thesis has demonstrated the feasibility of using machine learning, more specifically neural networks, in the problem of decoy discrimination.
More significantly, evolutionary information in the form of PSI-BLAST profiles has been successfully used to further improve decoy discrimination, particularly in the discrimination of native structures.
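The discrimination setup can be sketched with the simplest possible neural discriminator, a single logistic unit trained on synthetic stand-ins for the pairwise features described above; the feature distributions, sizes and training parameters are illustrative assumptions, not the thesis's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-structure pairwise features: residue-pair
# identity score, sequence separation, pairwise distance, solvent accessibility.
natives = rng.normal(loc=1.0, scale=0.5, size=(50, 4))   # positive examples
decoys  = rng.normal(loc=-1.0, scale=0.5, size=(50, 4))  # negative examples
X = np.vstack([natives, decoys])
y = np.array([1] * 50 + [0] * 50)

# A single logistic unit -- the minimal "neural network" discriminator,
# trained by batch gradient descent on the cross-entropy loss.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", np.mean(pred == y))
```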
13

The application of ensemble neural networks for partial discharge pattern recognition

Mas'ud, Abdullahi Abubakar January 2013 (has links)
One technique of examining failures in the insulation of high voltage (HV) plant is through the evaluation of partial discharge (PD). PDs are electrical sparks that can deteriorate the insulation of HV equipment. However, once present, they become the principal mechanism of deterioration and can cause complete failure of the system, leading to capital costs and economic consequences. As a consequence, developing techniques to characterize and classify PD is of profound importance to condition monitoring engineers. Indeed, since the nature, form and characteristics of PD have been widely investigated and in many ways established, it is vital to determine novel techniques that can effectively classify PD patterns and give a reliable assessment of the nature of the PD fault. In this thesis, enhanced PD pattern recognition tools are developed. The strategy concentrates for the first time on the application of ensemble neural networks (ENNs) to classify PD statistical patterns. The capability of the ENN to distinguish PD patterns has been extensively investigated and its performance compared with the widely applied single neural network (SNN). The ENN is shown to be more robust and generally demonstrates improved classification potential over the SNN in classifying PD fault statistical features and their progressive degradation. The ENN can also discriminate PD patterns between arrangements of one or two voids, different point-to-earth oil-gap discharges and angular positioning of the points on pressboard. Finally, this thesis investigates for the first time the influence on the SNN and ENN of phase resolution (PR) and amplitude bin (AB) size of the φ-q-n (phase-amplitude-number) statistical fingerprints. The result shows that there is apparent statistical distinction for different PR and AB sizes on some of the statistical φ-q-n distributions.
Additionally, the ENN and SNN outputs can change depending on training and testing with different PR and AB sizes, and an optimised PR or AB size may be shown to exist.
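The benefit of an ensemble over a single network can be illustrated with a majority vote across independently erring members; the labels, per-member error rate and ensemble size below are illustrative assumptions, not the thesis's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)  # hypothetical binary PD fault labels

def noisy_member(y, error_rate):
    """A single network that misclassifies a random fraction of the patterns."""
    flip = rng.random(len(y)) < error_rate
    return np.where(flip, 1 - y, y)

# Nine ensemble members, each about 80% accurate, erring independently.
members = [noisy_member(y_true, 0.2) for _ in range(9)]
votes = np.mean(members, axis=0) > 0.5  # majority vote across the ensemble

single_acc = np.mean(members[0] == y_true)
ensemble_acc = np.mean(votes == y_true)
print(single_acc, ensemble_acc)  # the ensemble is markedly more accurate
```

The gain depends on the members' errors being (partly) independent, which is why ensembles are typically built from networks trained on different data or with different initialisations.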
14

Effective tutoring with empathic embodied conversational agents

Moyo, Sharon G. January 2014 (has links)
This thesis examines the prospect of using empathy in an Embodied Tutoring System (ETS) that guides students through an online quiz (by providing feedback on student answers and responding to self-reported student emotion). The ETS seeks to imitate human behaviours successfully used in one-to-one human tutorial interactions. The main hypothesis is that the interaction with an empathic ETS results in greater learning gains than a neutral ETS, primarily by encouraging positive and reducing negative student emotions using empathic feedback. In a preparatory study we investigated different strategies for expressing emotion by the ETS. We established that a multimodal strategy achieves the best results regarding how accurately human participants can recognise the emotions. This approach was used in developing the feedback strategy for our empathic ETS. The preparatory study was followed by two studies in which we compared a neutral with an empathic ETS. The ETS in the second of these studies was developed using results from the first of these studies. In both studies, we found no statistically significant difference in learning gains between the neutral and empathic ETS. However, we did discover a number of interactions between the ETS system, learning gains and, in particular, 1) student scores on an empathic tendency test and 2) student ability. We also analysed the subjective responses and the relation between self-reported emotions during the quiz and student learning gains. Based on our studies in a formal classroom setting, we assess the prospects of using empathic agents in a classroom setting and describe a number of requirements for their effective use.
15

Algorithms for power savings

Atkins, Leon January 2014 (has links)
The aim of this thesis is to analyse the real-world performance of existing speed scaling algorithms and show how to improve these algorithms by using knowledge of the data the algorithms are running on. This was done by running simulations of different speed scaling algorithms, using both real-world data and simulated data. In addition, the thesis improves the best known competitive ratio for minimising the maximum temperature of a schedule by an order of magnitude over previous results. This shows that different algorithms work better on certain types of data than others, and so the input data should be taken into account when choosing a speed scaling algorithm to run. This also means that the best performing speed scaling algorithm is not always that with the lowest competitive ratio, and to achieve the best performance, other factors should be taken into account when choosing which algorithm to run. In addition, an algorithm for minimising the maximum temperature is given. This algorithm is an order of magnitude improvement on the previous best known algorithm, and provides a novel technique for directly analysing the temperature competitiveness of an algorithm. Overall the thesis provides novel methods of improving the real-world performance of speed scaling. It both gives improved results for temperature scheduling, and also gives a new algorithm that can give improved performance on real-world data by taking the input into account. This is in contrast to previous speed scaling algorithms that only use factors such as the number of jobs to decide at what speed to run.
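The energy argument behind speed scaling can be sketched with the standard power model, power = speed**alpha; the workload, deadline and alpha below are illustrative, not taken from the thesis:

```python
def energy(work, speed, alpha=3):
    """Energy to complete `work` at constant `speed` when power is speed**alpha."""
    return (work / speed) * speed ** alpha  # time * power

work, deadline = 10.0, 5.0

# Run at the minimal constant speed that just meets the deadline (speed 2).
constant = energy(work, work / deadline)

# Versus sprinting the first half at double speed, then slowing for the rest.
t_sprint = (work / 2) / 4.0
slow_speed = (work / 2) / (deadline - t_sprint)
varied = energy(work / 2, 4.0) + energy(work / 2, slow_speed)

print(constant, varied)  # constant speed uses less energy: s**alpha is convex
```

This convexity is the classical reason schedulers prefer smooth speed profiles; the thesis's point is that on real-world inputs, other properties of the data also matter when choosing among algorithms.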
16

A type-2 fuzzy logic approach for multi-criteria group decision making

Naim, Nur Syibrah Muhamad January 2014 (has links)
Multi-Criteria Group Decision Making (MCGDM) is a decision tool which is able to find a unique agreement from a group of decision makers (DMs) by evaluating various conflicting criteria. However, current MCGDM techniques do not effectively deal with the large number of possibilities that cause disagreement between different judgements, nor with the variety of ideas and opinions among the decision makers, which leads to high uncertainty levels. There is a growing interest in investigating techniques to handle the uncertainties faced in many decision making applications. Studies in fuzzy decision making have grown rapidly in the utilisation of extended fuzzy set theories (e.g., Type-2 Fuzzy Sets, Intuitionistic Fuzzy Sets, Hesitant Fuzzy Sets, Vague Sets, Interval-valued Fuzzy Sets, etc.) to evaluate the uncertainties faced.
17

Methods for tackling games of strict competition

Samothrakis, Spyridon January 2014 (has links)
The primary goal of this thesis is to develop algorithms that can approximately but robustly solve strictly competitive games. Two streams of research are explored, which also form the main contributions of this thesis. The first one involves transferring techniques used in combinatorial games to real-time video games, allowing for strong players that can take decisions fast. The second one involves using evolutionary computation to approximate solutions in an off-line fashion in games of both perfect and imperfect information. The algorithms proposed are presented in this thesis alongside a number of experiments, which involve two real-time games (Tron and Pacman), a strategy board game (Othello) and a game of imperfect information (two-player Texas Hold'em). The experiments cover a wide range of game scenarios, each aimed at uncovering different facets of the algorithms used. For real-time games we conclude that strong a priori (or habitual) knowledge is required in order to act fast and successfully, but a player can massively benefit if this knowledge is combined with strong forward model exploitation methods like Monte-Carlo Tree Search. We show that Evolutionary Algorithms can be successfully used to obtain such a priori knowledge. Finally, for games of imperfect information, we show that one is able to obtain strong players offline using a novel iterative method; however, limitations in the function approximation schemes used mean that these methods are not optimal.
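Monte-Carlo Tree Search, the forward-model method named above, can be sketched on a toy strictly competitive game; the game (take 1 or 2 stones, last taker wins), the iteration budget and the exploration constant are illustrative assumptions:

```python
import math, random

random.seed(0)

def moves(pile):
    return [m for m in (1, 2) if m <= pile]

def rollout(pile):
    """Random playout; +1 if the player to move at `pile` wins, else -1."""
    sign = 1
    while pile > 0:
        pile -= random.choice(moves(pile))
        sign = -sign
    return -sign  # the player who took the last stone wins

class Node:
    def __init__(self, pile):
        self.pile, self.children, self.visits, self.value = pile, {}, 0, 0.0

def mcts(root_pile, iterations=2000, c=1.4):
    root = Node(root_pile)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection with UCB1; expand one new child per iteration.
        while node.pile > 0:
            unexplored = [m for m in moves(node.pile) if m not in node.children]
            if unexplored:
                m = random.choice(unexplored)
                node.children[m] = Node(node.pile - m)
                node = node.children[m]; path.append(node)
                break
            m = max(node.children, key=lambda k: -node.children[k].value /
                    node.children[k].visits +
                    c * math.sqrt(math.log(node.visits) / node.children[k].visits))
            node = node.children[m]; path.append(node)
        result = rollout(node.pile)
        # Backpropagation, flipping sign each ply (strict competition).
        for n in reversed(path):
            n.visits += 1
            n.value += result
            result = -result
    return max(root.children, key=lambda k: root.children[k].visits)

print(mcts(4))  # with enough iterations this converges to the optimal move, 1
```

Each child's value is stored from its own mover's perspective, so the parent negates it during selection; this sign-flip is exactly the minimax structure of a strictly competitive game.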
18

Towards better uncertainty handling based on zSlices and general type-2 fuzzy logic systems

Wagner, Christian January 2015 (has links)
This thesis presents an investigation towards better uncertainty handling using zSlices and general type-2 fuzzy systems. Uncertainty as a concept is investigated and its perception through history is reviewed, together with the several theories, such as probability- and possibility-based theories, which have been proposed to address uncertainty. The field of fuzzy logic, its roots, and its complementary nature with respect to probability theory are detailed, progressing to a focus on recent developments in uncertainty handling in the field of fuzzy logic, in particular the application of interval type-2 fuzzy logic systems and an emerging interest in general type-2 fuzzy systems. As a first step towards better uncertainty handling, the combination of stochastic search techniques such as genetic algorithms and interval type-2 fuzzy logic systems is investigated both from a conceptual and an experimental point of view. Several interesting results, in particular concerning the uncertainty representation of interval type-2 fuzzy sets, which cannot account for potentially complex distributions of uncertainty, motivate the specific focus on general type-2 fuzzy systems. While general type-2 fuzzy systems have been largely avoided because of their significant complexity, the three-dimensional representation of uncertainty associated with general type-2 fuzzy sets promises significant improvements in accurate uncertainty handling. This thesis presents the framework of zSlices based general type-2 fuzzy systems which allows for the implementation and application of general type-2 fuzzy systems while avoiding most of the complexity (in terms of computation, implementation and interpretability) of standard general type-2 fuzzy logic systems.
The complete theoretical framework of zSlices based general type-2 fuzzy systems is presented, including proofs and examples of all major operations (join, meet, centroid and type-reduction) required in the design of fuzzy logic controllers. Furthermore, a complete zSlices based general type-2 fuzzy logic controller implementing an edge-following behaviour of a two-wheeled mobile robot has been implemented and its performance investigated. In particular, the zSlices based general type-2 fuzzy logic controllers have been compared to corresponding type-1 and interval type-2 fuzzy logic controllers, which has made it possible to identify potential shortcomings in simplified representations of uncertainty (such as in interval type-2 fuzzy systems) which can be addressed using general type-2 fuzzy systems.
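The zSlices idea can be sketched as follows: the third dimension is discretised into zLevels, and each slice is treated as an interval type-2 set. The membership functions and the simplified slice-wise defuzzification below are illustrative stand-ins for the framework's actual join/meet/type-reduction operations:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)  # discretised universe of discourse

def gaussian(c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def centroid(mf):
    """Centroid of a type-1 membership function over x."""
    return np.sum(x * mf) / np.sum(mf)

# Each zSlice at level z is an interval type-2 set; its footprint of
# uncertainty shrinks as z grows (illustrative choice of membership functions).
z_levels = np.array([0.25, 0.5, 0.75, 1.0])
slices = [(gaussian(5.0, 1.0 - 0.4 * (1 - z)),   # lower membership function
           gaussian(5.0, 1.0 + 0.8 * (1 - z)))   # upper membership function
          for z in z_levels]

# Simplified defuzzification: per-slice interval-centroid midpoint, then a
# zLevel-weighted average across slices (a crude stand-in for running full
# Karnik-Mendel type-reduction on each slice).
mids = np.array([(centroid(lo) + centroid(hi)) / 2 for lo, hi in slices])
crisp = np.sum(z_levels * mids) / np.sum(z_levels)
print(crisp)  # the set is symmetric about x = 5, so the crisp output is 5.0
```

The key computational point survives even in this toy: every per-slice operation is an interval type-2 operation, so the general type-2 machinery reduces to a small number of well-understood interval computations.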
19

Intelligent communication and information processing for cyber-physical data

Ganz, Frieder January 2014 (has links)
There is a growing trend towards integrating physical data into the Internet which is supported by sensor devices, smartphones, GPS and many other sources that capture and communicate real world data. Cyber-Physical Data describes the type of data that represents observations and measurements gathered by sensor devices. These sensor devices are capable of transforming physical information (e.g. light, temperature, coordinates) into digitised data. With tremendous volumes of Cyber-Physical Data being created, novel methods have to be developed that facilitate processing and provisioning of the data. Automated techniques are required to extract and infer meaningful abstractions and/or higher-level knowledge for the end-user. Investigation of the related work leads to the conclusion that there has been significant work on communication and processing aspects of Cyber-Physical Data; however, there is a need for integrated solutions that contemplate the workflow from data acquisition to extraction and knowledge representation. We propose a set of novel solutions for Cyber-Physical Data communication and information processing by providing a middleware component that contains management and communication processing capabilities to deliver actionable knowledge to the end-user and services. We have developed a novel data abstraction method for Cyber-Physical Data. The abstraction method is based on a probabilistic graph model and machine-learning techniques to extract relevant information and infer knowledge from patterns that are represented by the abstracted data. The proposed approach is able to create human-readable/machine-interpretable abstractions from numerical sensor data with a precision of 79% and a recall of 94%. The automated ontology construction algorithm has a success rate of 84% in representing occurred events in the ontology.
Finally, an integrated software system is introduced that uses the middleware and the information processing techniques to provide a complete workflow from data acquisition to knowledge acquisition and representation.
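A toy illustration of turning numerical sensor data into human-readable abstractions and a small probabilistic graph over them; the readings, bin thresholds and labels are invented for illustration and are not the thesis's learned model:

```python
from collections import Counter

# Hypothetical temperature readings from a single sensor node.
readings = [18.2, 18.9, 25.4, 26.1, 25.8, 19.0, 30.5, 31.2]

def abstract(value):
    """Map a raw reading to a human-readable symbolic label (assumed bins)."""
    if value < 20:
        return "cold"
    if value < 28:
        return "warm"
    return "hot"

labels = [abstract(v) for v in readings]

# A minimal probabilistic graph over abstractions: edge weights are the
# empirical probabilities of transitions between consecutive labels.
transitions = Counter(zip(labels, labels[1:]))
total = sum(transitions.values())
graph = {edge: n / total for edge, n in transitions.items()}
print(labels)
print(graph)
```

In the thesis the abstraction is learned rather than hand-binned, but the shape of the output is the same: symbolic labels plus a weighted graph that downstream reasoning (e.g. ontology construction) can consume.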
20

Social media based scalable concept detection

Chatzilari, Elisavet January 2014 (has links)
Although over the past decades there has been remarkable progress in the field of computer vision, scientists are still confronted with the problem of designing techniques and frameworks that can easily scale to many different domains and disciplines. It is true that state-of-the-art approaches cannot produce highly effective models unless there is dedicated, and thus costly, human supervision in the process of learning. Recently, we have been witnessing the rapid growth of social media (e.g. images, videos, etc.) that emerged as the result of users' willingness to communicate, socialize, collaborate and share content. The outcome of this massive activity was the generation of a tremendous volume of user contributed data available on the Web, usually along with an indication of their meaning (i.e. tags). This has motivated researchers to investigate whether the Collective Intelligence that emerges from the users' contributions inside a Web 2.0 application can be used to remove or ease the burden of dedicated human supervision. By doing so, this social content can facilitate scalable but also effective learning. In this thesis we contribute towards this goal by tackling scalability in two ways: first, we opt to gather effortlessly high quality training content in order to facilitate scalable learning of numerous concepts, which will be referred to as system scalability. Towards this goal, we examine the potential of exploiting user tagged images for concept detection under both unsupervised and semi-supervised frameworks. Second, we examine the scalability issue from the perspective of computational complexity, which we will refer to as computational scalability. In this direction, we opt to minimize the computational cost while at the same time minimizing the inevitable performance loss by predicting the most prominent concepts to process further.
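The idea of using user tags as free but noisy supervision can be sketched with a single round of centroid-based self-training; the image features, tag indices and concept names below are invented for illustration and are not the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-D image features for two visual concepts, e.g. "sea" vs "forest".
sea = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
forest = rng.normal([3.0, 3.0], 0.5, size=(100, 2))
X = np.vstack([sea, forest])

# User tags act as free supervision: only a handful of images carry tags.
tagged_idx = np.array([0, 1, 100, 101])
tags = np.array([0, 0, 1, 1])

# Semi-supervised self-training: fit class centroids on the tagged images,
# then pseudo-label the untagged pool and refit (one round shown).
centroids = np.array([X[tagged_idx[tags == c]].mean(axis=0) for c in (0, 1)])
pseudo = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
centroids = np.array([X[pseudo == c].mean(axis=0) for c in (0, 1)])

true = np.array([0] * 100 + [1] * 100)
print("agreement with true concepts:", np.mean(pseudo == true))
```

The point of the sketch is scalability: four tagged images bootstrap labels for two hundred, with no dedicated annotation effort; real tag noise and feature overlap make the actual problem much harder.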
