171

Cosmetic quality of surfaces : a computational approach

Balendran, Velupillai January 1993
No description available.
172

Using Channel-Specific Models to Detect and Mitigate Reverberation in Cochlear Implants

Desmond, Jill Marie January 2014
Cochlear implants (CIs) are devices that restore some level of hearing to deaf individuals. Because of their design and the impaired nature of the deafened auditory system, CIs provide listeners with limited spectral and temporal information, resulting in speech recognition that degrades more rapidly for CI listeners than for normal hearing listeners in noisy and reverberant environments (Kokkinakis and Loizou, 2011). This research project aimed to mitigate the effects of reverberation by directly manipulating the CI pulse train. A reverberation detection algorithm was initially developed to control processing when switching between the mitigation algorithm and a standard signal processing algorithm used when no mitigation is needed. Next, the benefit of removing two separate effects of reverberation was studied. Finally, two reverberation mitigation algorithms were developed. Because the two algorithms resulted in comparable performance, the effect of one algorithm on speech recognition was assessed in normal hearing (NH) and CI listeners.

Reverberation detection, which has not been thoroughly investigated in the CI literature, would provide a method to control the initiation of a reverberation mitigation algorithm. Although a mitigation algorithm would ideally remove reverberation without affecting non-reverberant signals, most noise and reverberation mitigation algorithms make errors and should only be applied when necessary. Therefore, a reverberation detection algorithm was designed to control the reverberation mitigation algorithm and thereby reduce unnecessary processing. The detection algorithm was implemented by first developing features from the frequency-time matrices that result from the standard CI speech processing algorithm. Next, using these features, a maximum a posteriori classifier was shown to successfully discriminate speech in quiet, reverberation, speech shaped noise, and white Gaussian noise with 94% accuracy.

In order to develop the mitigation algorithm that would be controlled by the reverberation detection algorithm, a unique approach to reverberation mitigation was considered. This research project hypothesized that focusing mitigation on one effect of reverberation, either self-masking (masking within an individual phoneme) or overlap-masking (masking of one phoneme by a preceding phoneme) (Bolt and MacDonald, 1949), may allow for a reverberation mitigation strategy that operates in real-time. In order to determine the feasibility of this approach, the benefit of mitigating the two effects of reverberation was assessed by comparing speech recognition scores for speech in reverberation to reverberant speech after ideal self-masking mitigation and to reverberant speech after ideal overlap-masking mitigation. Testing was completed with normal hearing listeners via an acoustic model as well as with CI listeners using their devices. Mitigating either effect was found to improve CI speech recognition in reverberant environments. These results suggested that a new, causal approach could be taken to reverberation mitigation.

Based on the success of the feasibility study, two initial overlap-masking mitigation algorithms were implemented and applied once reverberation was detected in speech stimuli. One algorithm processed a pulse train signal after CI speech processing, while the second algorithm processed the acoustic signal. Performance of the two overlap-masking mitigation algorithms was evaluated in simulation by comparing pulses that were determined to be overlap-masking with the known truth. Using the features explored in this work, performance was comparable between the two methods. Therefore, only the post-CI speech processing reverberation mitigation algorithm was implemented in a CI speech processing strategy.

An initial experiment was conducted, using NH listeners and an acoustic model designed to present the frequency and temporal information that would be available to a CI listener. Listeners were presented with speech stimuli in the presence of both mitigated and unmitigated simulated reverberant conditions, and speech recognition was found to improve after reverberation mitigation. A subsequent experiment, also using NH listeners and an acoustic model, explored the effects of recorded room impulse responses (RIRs) and added noise (speech shaped noise and multi-talker babble) on the mitigation strategy. Because reverberation mitigation did not consistently improve speech recognition in these conditions, an analysis of the fundamental differences between simulated and recorded RIRs was conducted. Finally, CI listeners were presented with simulated reverberant speech, both with and without reverberation mitigation, and the effect of the mitigation strategy on speech recognition was studied. Because the reverberation mitigation strategy did not consistently improve speech recognition, future work is required to analyze the effects of algorithm-specific parameters for CI listeners. / Dissertation
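The abstract does not describe the detector's internals; as a rough sketch only, a maximum a posteriori classifier over features derived from the CI frequency-time matrix might be structured as follows. The Gaussian likelihood, the feature extraction, and all names are assumptions, not the author's implementation.

```python
import numpy as np

# Hypothetical sketch of a maximum a posteriori (MAP) classifier over features
# computed from a cochlear-implant frequency-time matrix. The four classes
# mirror those named in the abstract; the Gaussian likelihood is an assumption.
CLASSES = ["quiet", "reverberation", "speech_shaped_noise", "white_gaussian_noise"]

class MAPDetector:
    def fit(self, X, y):
        """Estimate class priors and per-class Gaussian likelihood parameters."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.priors, self.means, self.vars = {}, {}, {}
        for c in CLASSES:
            Xc = X[y == c]
            self.priors[c] = len(Xc) / len(X)
            self.means[c] = Xc.mean(axis=0)
            self.vars[c] = Xc.var(axis=0) + 1e-6  # guard against zero variance
        return self

    def predict(self, x):
        """Return the class maximising log p(x | c) + log p(c)."""
        x = np.asarray(x, dtype=float)
        def log_posterior(c):
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * self.vars[c])
                                    + (x - self.means[c]) ** 2 / self.vars[c])
            return log_lik + np.log(self.priors[c])
        return max(CLASSES, key=log_posterior)
```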
173

Improved rule-based document representation and classification using genetic programming

Soltan-Zadeh, Yasaman January 2011
No description available.
174

Using machine-learning to efficiently explore the architecture/compiler co-design space

Dubach, Christophe January 2009
Designing new microprocessors is a time-consuming task. Architects rely on slow simulators to evaluate performance and a significant proportion of the design space has to be explored before an implementation is chosen. This process becomes more time-consuming when compiler optimisations are also considered. Once the architecture is selected, a new compiler must be developed and tuned. What is needed are techniques that can speed up this whole process and develop a new optimising compiler automatically. This thesis proposes the use of machine-learning techniques to address architecture/compiler co-design. First, two performance models are developed and are used to efficiently search the design space of a microarchitecture. These models accurately predict performance metrics such as cycles or energy, or a tradeoff of the two. The first model uses just 32 simulations to model the entire design space of new applications, an order of magnitude fewer than state-of-the-art techniques. The second model addresses offline training costs and predicts the average behaviour of a complete benchmark suite. Compared to the state-of-the-art, it needs five times fewer training simulations when applied to the SPEC CPU 2000 and MiBench benchmark suites. Next, the impact of compiler optimisations on the design process is considered. This has the potential to change the shape of the design space and improve performance significantly. A new model is proposed that predicts the performance obtainable by an optimising compiler for any design point, without having to build the compiler. Compared to the state-of-the-art, this model achieves a significantly lower error rate. Finally, a new machine-learning optimising compiler is presented that predicts the best compiler optimisation setting for any new program on any new microarchitecture. It achieves an average speedup of 1.14x over the default best gcc optimisation level. This represents 61% of the maximum speedup available, using just one profile run of the application.
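As a hedged illustration of the first idea above (a predictive model trained on a small number of simulations and then used to estimate the rest of the design space), the following sketch uses a generic regressor; the parameter encoding, the regressor choice, and the random stand-in for simulator output are assumptions rather than the thesis's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sketch: simulate a small sample of design points, train a
# regressor on the results, and predict cycles (or energy) for every other
# architectural configuration without further simulation.
rng = np.random.default_rng(0)
design_space = rng.integers(1, 9, size=(1000, 4))          # encoded parameters, e.g. cache size, issue width
sampled = rng.choice(len(design_space), size=32, replace=False)
simulated_cycles = rng.normal(1e6, 1e5, size=32)            # stand-in for slow simulator runs

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(design_space[sampled], simulated_cycles)
predicted_cycles = model.predict(design_space)               # estimate for every design point
best_design = design_space[np.argmin(predicted_cycles)]      # pick the predicted-fastest configuration
```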
175

Online intrusion detection design and implementation for SCADA networks

Wang, Hongrui 25 April 2017
The standardization and interconnection of supervisory control and data acquisition (SCADA) systems has exposed the systems to cyber attacks. To improve the security of SCADA systems, intrusion detection system (IDS) design is an effective method. However, traditional IDS design in industrial networks mainly exploits predefined rules, which need to be complemented and developed to adapt to the big data scenario. Therefore, this thesis aims to design a novel anomaly-based hierarchical online intrusion detection system (HOIDS) for SCADA networks based on machine learning algorithms, and to implement the anomaly-based intrusion detection concept on a testbed. The theoretical design of HOIDS utilizes a server-client topology while keeping clients distributed for global protection, achieving a high detection rate with minimum network impact. We implement accurate models of normal-abnormal binary detection and multi-attack identification based on logistic regression and a quasi-Newton optimization algorithm using the Broyden-Fletcher-Goldfarb-Shanno approach. The detection system is capable of accelerating detection through information gain based feature selection or principal component analysis based dimension reduction. By evaluating our system using the KDD99 dataset and the industrial control system datasets, we demonstrate that our design is highly scalable, efficient and cost effective for securing SCADA infrastructures. Besides the theoretical IDS design, a testbed is modified and implemented for SCADA network security research. It simulates the working environment of SCADA systems with the functions of data collection and analysis for intrusion detection. The testbed is implemented to be more flexible and extensible than the existing related work on testbeds. In the testbed, the Bro network analyzer is introduced to support research on anomaly-based intrusion detection. The procedures of both signature-based intrusion detection and anomaly-based intrusion detection using the Bro analyzer are also presented. In addition, a generic Linux-based host is used as the container of different network functions, and a human machine interface (HMI) together with the supervising network is set up to simulate the control center. The testbed does not implement a large number of traffic generation methods, but still provides useful examples of generating normal and abnormal traffic, and it can be modified or expanded in future work on SCADA network security. / Graduate
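For the binary normal/abnormal detector, the abstract names logistic regression fitted with the BFGS quasi-Newton method; a minimal sketch of that combination (feature preparation, function names, and the decision threshold are assumed) could look like this:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(w, X, y):
    """Average negative log-likelihood of logistic regression."""
    p = sigmoid(X @ w)
    eps = 1e-12  # guard against log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def fit_logistic_bfgs(X, y):
    """X: (n, d) features, e.g. after information-gain selection or PCA;
    y: 0/1 labels (normal vs. abnormal traffic). Fitted with the BFGS optimizer."""
    w0 = np.zeros(X.shape[1])
    result = minimize(neg_log_likelihood, w0, args=(X, y), method="BFGS")
    return result.x

def detect(w, X, threshold=0.5):
    """Flag samples whose predicted probability of being abnormal exceeds the threshold."""
    return (sigmoid(X @ w) >= threshold).astype(int)
```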
176

Embodied Visual Object Recognition / Förkroppsligad objektigenkänning

Wallenberg, Marcus January 2017
Object recognition is a skill we as humans often take for granted. Due to our formidable object learning, recognition and generalisation skills, it is sometimes hard to see the multitude of obstacles that need to be overcome in order to replicate this skill in an artificial system. Object recognition is also one of the classical areas of computer vision, and many ways of approaching the problem have been proposed. Recently, visually capable robots and autonomous vehicles have increased the focus on embodied recognition systems and active visual search. These applications demand that systems can learn and adapt to their surroundings, and arrive at decisions in a reasonable amount of time, while maintaining high object recognition performance. This is especially challenging due to the high dimensionality of image data. In cases where end-to-end learning from pixels to output is needed, mechanisms designed to make inputs tractable are often necessary for less computationally capable embodied systems. Active visual search also means that mechanisms for attention and gaze control are integral to the object recognition procedure. Therefore, the way in which attention mechanisms should be introduced into feature extraction and estimation algorithms must be carefully considered when constructing a recognition system. This thesis describes work done on the components necessary for creating an embodied recognition system, specifically in the areas of decision uncertainty estimation, object segmentation from multiple cues, adaptation of stereo vision to a specific platform and setting, problem-specific feature selection, efficient estimator training and attentional modulation in convolutional neural networks. Contributions include the evaluation of methods and measures for predicting the potential uncertainty reduction that can be obtained from additional views of an object, allowing for adaptive target observations. Also, in order to separate a specific object from other parts of a scene, it is often necessary to combine multiple cues such as colour and depth in order to obtain satisfactory results. Therefore, a method for combining these using channel coding has been evaluated. In order to make use of three-dimensional spatial structure in recognition, a novel stereo vision algorithm extension along with a framework for automatic stereo tuning have also been investigated. Feature selection and efficient discriminant sampling for decision tree-based estimators have also been implemented. Finally, attentional multi-layer modulation of convolutional neural networks for recognition in cluttered scenes has been evaluated. Several of these components have been tested and evaluated on a purpose-built embodied recognition platform known as Eddie the Embodied. / Embodied Visual Object Recognition / FaceTrack
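As a loose illustration of attentional modulation of convolutional feature maps (the thesis's multi-layer scheme is not described in the abstract, so the form below is a generic assumption, not the evaluated method):

```python
import numpy as np

def modulate(feature_maps, attention_mask):
    """Reweight convolutional feature maps by a spatial attention mask.
    feature_maps: (channels, H, W); attention_mask: (H, W) with values in [0, 1]."""
    return feature_maps * attention_mask[None, :, :]

# Toy usage: attend to the centre of a cluttered scene.
features = np.random.rand(64, 32, 32)
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0
attended = modulate(features, mask)
```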
177

A Machine Learning Method Suitable for Dynamic Domains

Rowe, Michael C. (Michael Charles) 07 1900
The efficacy of a machine learning technique is domain dependent. Some machine learning techniques work very well for certain domains but are ill-suited for other domains. One area that is of real-world concern is the flexibility with which machine learning techniques can adapt to dynamic domains. Currently, there are no known reports of any system that can learn dynamic domains, short of starting over (i.e., re-running the program). Starting over is neither time nor cost efficient for real-world production environments. This dissertation studied a method, referred to as Experience Based Learning (EBL), that attempts to deal with conditions related to learning dynamic domains. EBL is an extension of Instance Based Learning methods. The hypothesis of this research was that the EBL method would automatically adjust to domain changes and still provide classification accuracy similar to methods that require starting over. To test this hypothesis, twelve widely studied machine learning datasets were used. A dynamic domain was simulated by presenting these datasets in an uninterrupted cycle of train, test, and retrain. The order of the twelve datasets and the order of records within each dataset were randomized to control for order biases in each of ten runs. As a result, these methods provided datasets that represent extreme levels of domain change. Using the above datasets, EBL's mean classification accuracies for each dataset were compared to the published static domain results of other machine learning systems. The results indicated that the EBL system's performance was not statistically different (p > 0.30) from that of the other machine learning methods. These results indicate that the EBL system is able to adjust to an extreme level of domain change and yet produce satisfactory results. This finding supports the use of the EBL method in real-world environments that incur rapid changes to both variables and values.
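EBL itself is not specified in the abstract; the sketch below only illustrates the instance-based family it extends, where new labelled experiences are stored incrementally so that adapting to a changed domain never requires starting over. The class name and the k-nearest-neighbour voting rule are assumptions.

```python
import numpy as np

class IncrementalKNN:
    """Stores every labelled experience; classification is a majority vote over
    the k nearest stored instances, so adapting to a changed domain only means
    continuing to observe, never retraining from scratch."""

    def __init__(self, k=5):
        self.k = k
        self.X, self.y = [], []

    def observe(self, x, label):
        # Train and retrain are the same operation: append the new experience.
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)

    def classify(self, x):
        dists = np.linalg.norm(np.asarray(self.X) - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        votes = [self.y[i] for i in nearest]
        return max(set(votes), key=votes.count)
```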
178

Machine learning in systems biology at different scales : from molecular biology to ecology

Aderhold, Andrej January 2015
Machine learning has been a source of continuous methodological advances in the field of computational learning from data. Systems biology has profited in various ways from machine learning techniques, but in particular from network inference, i.e. the learning of interactions given observed quantities of the involved components or data that stem from interventional experiments. Originally this domain of systems biology was confined to the inference of gene regulation networks, but it has recently expanded to other levels of organization of biological and ecological systems. The application to species interaction networks in a varying environment is of especially mounting importance for improving our understanding of the dynamics of species extinctions, invasions, and population behaviour in general. The aim of this thesis is to demonstrate an extensive study of various state-of-the-art machine learning techniques applied to a genetic regulation system in plants, and to expand and modify some of these methods to infer species interaction networks in an ecological setting. The first study attempts to improve the knowledge about circadian regulation in the plant Arabidopsis thaliana from the viewpoint of machine learning and gives suggestions on what methods are best suited for inference, how the data should be processed and modelled mathematically, and what quality of network learning can be expected by doing so. To achieve this, I generate a rich and realistic synthetic data set that is used for various studies under consideration of different effects and method setups. The best method and setup is applied to real transcriptional data, which leads to a new hypothesis about the circadian clock network structure. The ecological study is focused on the development of two novel inference methods that exploit a common principle from transcriptional time-series, which states that expression profiles over time can be temporally heterogeneous. The corresponding concept in a two-dimensional spatial domain is that species interaction dynamics can be spatially heterogeneous, i.e. can change in space depending on the environment and other factors. I demonstrate the expansion from the one-dimensional time domain to the two-dimensional spatial domain, introduce two distinct space segmentation schemes, and consider species dispersion effects with spatial autocorrelation. The two novel methods display a significant improvement in species interaction inference compared to competing methods and a high confidence in learning the spatial structure of different species neighbourhoods or environments.
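The specific inference methods compared and developed in the thesis are not detailed in the abstract; as a generic, assumed baseline only, network inference is often cast as per-target sparse regression, along these lines:

```python
import numpy as np
from sklearn.linear_model import Lasso

def infer_network(data, alpha=0.05):
    """data: (n_samples, n_components) observations (expression levels or
    species abundances). Regress each component on all others and read
    candidate interactions off the nonzero coefficients."""
    n = data.shape[1]
    weights = np.zeros((n, n))
    for target in range(n):
        predictors = np.delete(data, target, axis=1)
        model = Lasso(alpha=alpha).fit(predictors, data[:, target])
        weights[target, np.arange(n) != target] = model.coef_  # row = influences on `target`
    return weights
```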
179

Autotuning wavefront patterns for heterogeneous architectures

Mohanty, Siddharth January 2015
Manual tuning of applications for heterogeneous parallel systems is tedious and complex. Optimizations are often not portable, and the whole process must be repeated when moving to a new system, or sometimes even to a different problem size. Pattern based parallel programming models were originally designed to provide programmers with an abstract layer, hiding tedious parallel boilerplate code, and allowing a focus on only application specific issues. However, the constrained algorithmic model associated with each pattern also enables the creation of pattern-specific optimization strategies. These can capture more complex variations than would be accessible by analysis of equivalent unstructured source code. These variations create complex optimization spaces. Machine learning offers well established techniques for exploring such spaces. In this thesis we use machine learning to create autotuning strategies for heterogeneous parallel implementations of applications which follow the wavefront pattern. In a wavefront, computation starts from one corner of the problem grid and proceeds diagonally like a wave to the opposite corner in either two or three dimensions. Our framework partitions and optimizes the work created by these applications across systems comprising multicore CPUs and multiple GPU accelerators. The tuning opportunities for a wavefront include controlling the amount of computation to be offloaded onto GPU accelerators, choosing the number of CPU and GPU threads to process tasks, tiling for both CPU and GPU memory structures, and trading redundant halo computation against communication for multiple GPUs. Our exhaustive search of the problem space shows that these parameters are very sensitive to the combination of architecture, wavefront instance and problem size. We design and investigate a family of autotuning strategies, targeting single and multiple CPU + GPU systems, and both two and three dimensional wavefront instances. These yield an average of 87% of the performance found by offline exhaustive search, with up to 99% in some cases.
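A minimal sketch of the wavefront pattern itself (written sequentially here; in the thesis the cells on each anti-diagonal are the units partitioned across CPUs and GPUs) could look like the following, with the grid, update rule, and names all assumed for illustration:

```python
import numpy as np

def wavefront_sweep(grid, update):
    """Sweep a 2D grid from the top-left to the bottom-right corner.
    Cells on the same anti-diagonal have no mutual dependencies, which is the
    parallelism the pattern exposes for CPU/GPU execution."""
    rows, cols = grid.shape
    for d in range(rows + cols - 1):                     # one anti-diagonal per step
        for i in range(max(0, d - cols + 1), min(rows, d + 1)):
            j = d - i
            update(grid, i, j)                           # independent within the diagonal
    return grid

# Toy update rule depending only on the north and west neighbours.
def example_update(g, i, j):
    north = g[i - 1, j] if i > 0 else 0.0
    west = g[i, j - 1] if j > 0 else 0.0
    g[i, j] = max(north, west) + 1.0

result = wavefront_sweep(np.zeros((4, 6)), example_update)
```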
180

Exploiting Application Characteristics for Efficient System Support of Data-Parallel Machine Learning

Cui, Henggang 01 May 2017
Large scale machine learning has many characteristics that can be exploited in the system designs to improve its efficiency. This dissertation demonstrates that the characteristics of the ML computations can be exploited in the design and implementation of parameter server systems, to greatly improve the efficiency by an order of magnitude or more. We support this thesis statement with three case study systems, IterStore, GeePS, and MLtuner. IterStore is an optimized parameter server system design that exploits the repeated data access pattern characteristic of ML computations. The designed optimizations allow IterStore to reduce the total run time of our ML benchmarks by up to 50×. GeePS is a parameter server that is specialized for deep learning on distributed GPUs. By exploiting the layer-by-layer data access and computation pattern of deep learning, GeePS provides almost linear scalability from single-machine baselines (13× more training throughput with 16 machines), and also supports neural networks that do not fit in GPU memory. MLtuner is a system for automatically tuning the training tunables of ML tasks. It exploits the characteristic that the best tunable settings can often be decided quickly with just a short trial time. By making use of optimization-guided online trial-and-error, MLtuner can robustly find and re-tune tunable settings for a variety of machine learning applications, including image classification, video classification, and matrix factorization, and is over an order of magnitude faster than traditional hyperparameter tuning approaches.
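As a hedged sketch of the parameter-server abstraction these systems build on (not IterStore's or GeePS's actual interface), workers read shared parameters by key and push additive updates:

```python
import numpy as np

class ParameterServer:
    """Single-process stand-in for a parameter server: parameters are shared
    arrays keyed by name; workers read them and push additive updates. Real
    systems add sharding, caching of repeated access patterns (IterStore) and
    GPU memory management (GeePS), none of which is shown here."""

    def __init__(self):
        self.table = {}

    def init_param(self, key, shape):
        self.table[key] = np.zeros(shape)

    def read(self, key):
        return self.table[key].copy()

    def update(self, key, delta):
        self.table[key] += delta

# One worker iteration: fetch weights, compute a (stand-in) gradient, push the step.
ps = ParameterServer()
ps.init_param("layer1/weights", (4, 4))
w = ps.read("layer1/weights")
gradient = np.full((4, 4), 0.01)
ps.update("layer1/weights", -0.1 * gradient)   # learning rate 0.1
```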
