281 |
Cosmetic quality of surfaces : a computational approach. Balendran, Velupillai. January 1993 (has links)
No description available.
|
282 |
Using Channel-Specific Models to Detect and Mitigate Reverberation in Cochlear Implants. Desmond, Jill Marie. January 2014 (has links)
Cochlear implants (CIs) are devices that restore some level of hearing to deaf individuals. Because of their design and the impaired nature of the deafened auditory system, CIs provide listeners with limited spectral and temporal information, resulting in speech recognition that degrades more rapidly for CI listeners than for normal hearing listeners in noisy and reverberant environments (Kokkinakis and Loizou, 2011). This research project aimed to mitigate the effects of reverberation by directly manipulating the CI pulse train. A reverberation detection algorithm was initially developed to control processing when switching between the mitigation algorithm and a standard signal processing algorithm used when no mitigation is needed. Next, the benefit of removing two separate effects of reverberation was studied. Finally, two reverberation mitigation algorithms were developed. Because the two algorithms resulted in comparable performance, the effect of one algorithm on speech recognition was assessed in normal hearing (NH) and CI listeners.

Reverberation detection, which has not been thoroughly investigated in the CI literature, would provide a method to control the initiation of a reverberation mitigation algorithm. Although a mitigation algorithm would ideally remove reverberation without affecting non-reverberant signals, most noise and reverberation mitigation algorithms make errors and should only be applied when necessary. Therefore, a reverberation detection algorithm was designed to control the reverberation mitigation algorithm and thereby reduce unnecessary processing. The detection algorithm was implemented by first developing features from the frequency-time matrices that result from the standard CI speech processing algorithm. Next, using these features, a maximum a posteriori classifier was shown to successfully discriminate speech in quiet, reverberation, speech-shaped noise, and white Gaussian noise with 94% accuracy.

In order to develop the mitigation algorithm that would be controlled by the reverberation detection algorithm, a unique approach to reverberation mitigation was considered. This research project hypothesized that focusing mitigation on one effect of reverberation, either self-masking (masking within an individual phoneme) or overlap-masking (masking of one phoneme by a preceding phoneme) (Bolt and MacDonald, 1949), may allow for a reverberation mitigation strategy that operates in real-time. In order to determine the feasibility of this approach, the benefit of mitigating the two effects of reverberation was assessed by comparing speech recognition scores for speech in reverberation to reverberant speech after ideal self-masking mitigation and to reverberant speech after ideal overlap-masking mitigation. Testing was completed with normal hearing listeners via an acoustic model as well as with CI listeners using their devices. Mitigating either effect was found to improve CI speech recognition in reverberant environments. These results suggested that a new, causal approach could be taken to reverberation mitigation.

Based on the success of the feasibility study, two initial overlap-masking mitigation algorithms were implemented and applied once reverberation was detected in speech stimuli. One algorithm processed a pulse train signal after CI speech processing, while the second algorithm processed the acoustic signal. Performance of the two overlap-masking mitigation algorithms was evaluated in simulation by comparing pulses that were determined to be overlap-masking with the known truth. Using the features explored in this work, performance was comparable between the two methods. Therefore, only the post-CI speech processing reverberation mitigation algorithm was implemented in a CI speech processing strategy.

An initial experiment was conducted, using NH listeners and an acoustic model designed to present the frequency and temporal information that would be available to a CI listener. Listeners were presented with speech stimuli in the presence of both mitigated and unmitigated simulated reverberant conditions, and speech recognition was found to improve after reverberation mitigation. A subsequent experiment, also using NH listeners and an acoustic model, explored the effects of recorded room impulse responses (RIRs) and added noise (speech-shaped noise and multi-talker babble) on the mitigation strategy. Because reverberation mitigation did not consistently improve speech recognition in these conditions, an analysis of the fundamental differences between simulated and recorded RIRs was conducted. Finally, CI listeners were presented with simulated reverberant speech, both with and without reverberation mitigation, and the effect of the mitigation strategy on speech recognition was studied. Because the reverberation mitigation strategy did not consistently improve speech recognition, future work is required to analyze the effects of algorithm-specific parameters for CI listeners. / Dissertation
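The maximum a posteriori detection step summarised above can be sketched with generic tools: fit one Gaussian and one prior per listening condition, then pick the class with the highest posterior. This is a minimal illustration under assumed inputs (the feature extraction from CI frequency-time matrices and the class set are placeholders), not the dissertation's implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

class MAPClassifier:
    """Minimal MAP classifier: one Gaussian plus one class prior per condition."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.dists_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            # Small diagonal term keeps the covariance invertible for sketch data.
            self.dists_[c] = multivariate_normal(
                mean=Xc.mean(axis=0),
                cov=np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]),
            )
        return self

    def predict(self, X):
        # Posterior is proportional to likelihood times prior; compare in log space.
        scores = np.column_stack(
            [self.dists_[c].logpdf(X) + np.log(self.priors_[c]) for c in self.classes_]
        )
        return self.classes_[np.argmax(scores, axis=1)]

# Hypothetical usage: X holds features derived from CI frequency-time matrices,
# y labels each segment as quiet, reverberant, speech-shaped noise, or white noise.
```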
|
283 |
Improved rule-based document representation and classification using genetic programming. Soltan-Zadeh, Yasaman. January 2011 (has links)
No description available.
|
284 |
Using machine-learning to efficiently explore the architecture/compiler co-design space. Dubach, Christophe. January 2009 (has links)
Designing new microprocessors is a time consuming task. Architects rely on slow simulators to evaluate performance, and a significant proportion of the design space has to be explored before an implementation is chosen. This process becomes more time consuming when compiler optimisations are also considered. Once the architecture is selected, a new compiler must be developed and tuned. What is needed are techniques that can speed up this whole process and develop a new optimising compiler automatically. This thesis proposes the use of machine-learning techniques to address architecture/compiler co-design. First, two performance models are developed and are used to efficiently search the design space of a microarchitecture. These models accurately predict performance metrics such as cycles or energy, or a tradeoff of the two. The first model uses just 32 simulations to model the entire design space of new applications, an order of magnitude fewer than state-of-the-art techniques. The second model addresses offline training costs and predicts the average behaviour of a complete benchmark suite. Compared to state-of-the-art, it needs five times fewer training simulations when applied to the SPEC CPU 2000 and MiBench benchmark suites. Next, the impact of compiler optimisations on the design process is considered. This has the potential to change the shape of the design space and improve performance significantly. A new model is proposed that predicts the performance obtainable by an optimising compiler for any design point, without having to build the compiler. Compared to the state-of-the-art, this model achieves a significantly lower error rate. Finally, a new machine-learning optimising compiler is presented that predicts the best compiler optimisation setting for any new program on any new microarchitecture. It achieves an average speedup of 1.14x over the default best gcc optimisation level. This represents 61% of the maximum speedup available, using just one profile run of the application.
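As a rough illustration of the kind of predictive performance model described in this abstract, the sketch below fits a regression model on a small sample of simulated design points (32, matching the training budget mentioned above) and predicts a metric for the rest of the space; the design-space encoding and the synthetic cycle counts are assumptions, not the thesis's actual data or model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: each row is one microarchitectural configuration
# (e.g. cache size, issue width, pipeline depth), encoded numerically.
design_space = rng.integers(1, 9, size=(2000, 3)).astype(float)

# Pretend we can afford to simulate only 32 configurations; the cycle counts
# below are synthetic stand-ins for simulator output.
train_idx = rng.choice(len(design_space), size=32, replace=False)
X_train = design_space[train_idx]
y_train = 1e6 / (X_train[:, 0] * X_train[:, 1]) + 1e4 * X_train[:, 2]

model = GradientBoostingRegressor().fit(X_train, y_train)

# Predict cycles for every other design point without running the simulator.
predicted_cycles = model.predict(design_space)
best_config = design_space[np.argmin(predicted_cycles)]
```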
|
285 |
Quantifying the Trenches: Machine Learning Applied to NFL Offensive Lineman Valuation. Pyne, Sean. 01 January 2017 (links)
There are 32 teams in the National Football League, all competing to be the best by creating the strongest roster possible. The problem of evaluating talent has created extreme competition between teams in the form of a rookie draft and a fiercely competitive veteran free agent market. The difficulty with player evaluation is due to the noise associated with measuring a particular player’s value. The intent of this paper is to create an algorithm for identifying the inefficiencies in pricing in these player markets. In particular, this paper focuses on the veteran free agent market for offensive linemen in the NFL. NFL offensive linemen are difficult to evaluate empirically because of the significant amount of noise present due to an inability to measure a lineman’s performance directly. The algorithm first uses a machine learning technique, k-means cluster analysis, to generate a comparative set of offensive linemen. Then, using that set of comparable offensive linemen, the algorithm flags any linemen whose earnings vary significantly from those of their peers. It is in this fashion that the algorithm provides relative valuations for particular offensive linemen.
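A minimal sketch of the two-stage idea described above, clustering linemen into comparable groups and then flagging salaries that deviate from the cluster, might look like the following; the feature names, cluster count, and threshold are assumptions rather than the paper's actual inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def flag_mispriced_linemen(features, salaries, n_clusters=5, z_threshold=2.0):
    """Cluster linemen on performance-related features, then flag any whose
    salary sits far from the mean salary of their own cluster."""
    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    flags = np.zeros(len(salaries), dtype=bool)
    for c in range(n_clusters):
        in_cluster = labels == c
        mu = salaries[in_cluster].mean()
        sigma = salaries[in_cluster].std() + 1e-9
        flags[in_cluster] = np.abs(salaries[in_cluster] - mu) / sigma > z_threshold
    return labels, flags

# Hypothetical inputs: one row per lineman with columns such as snap counts,
# pressures allowed, and age; salaries given as annual cap dollars.
```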
|
286 |
Online intrusion detection design and implementation for SCADA networks. Wang, Hongrui. 25 April 2017 (links)
The standardization and interconnection of supervisory control and data acquisition
(SCADA) systems have exposed these systems to cyber attacks. Intrusion detection system (IDS) design is an effective method for improving the security of SCADA systems. However, traditional IDS design for industrial networks mainly exploits predefined rules, which need to be complemented and extended to cope with big data scenarios. Therefore, this thesis aims to design an anomaly-based, hierarchical online intrusion detection system (HOIDS) for SCADA networks based on machine learning algorithms, and to implement the anomaly-based intrusion detection idea on a testbed. The theoretical design of HOIDS uses a server-client topology while keeping clients distributed for global protection, achieving a high detection rate with minimal network impact. We implement accurate models of normal-abnormal binary detection and multi-attack identification based on logistic regression and a quasi-Newton optimization algorithm using the Broyden-Fletcher-Goldfarb-Shanno approach. The detection system is capable of accelerating detection through information gain based feature selection or principal component analysis based dimension reduction. By evaluating our system using the KDD99 dataset and industrial control system datasets, we demonstrate that our design is highly scalable, efficient and cost effective for securing SCADA infrastructures. Besides the theoretical IDS design, a testbed is modified and implemented for SCADA network security research. It simulates the working environment of SCADA systems with the functions of data collection and analysis for intrusion detection. The testbed is implemented to be more flexible and extensible than existing related testbeds. In the testbed, the Bro network analyzer is introduced to support research on anomaly-based intrusion detection. The procedures of both signature-based and anomaly-based intrusion detection using the Bro analyzer are also presented. In addition, a generic Linux-based host is used as the container for different network functions, and a human machine interface (HMI) together with the supervising network is set up to simulate the control center. The testbed does not implement a large number of traffic generation methods, but still provides useful examples of generating normal and abnormal traffic. The testbed can also be modified or expanded in future work on SCADA network security. / Graduate
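A condensed sketch of the detection pipeline described above, assembled from off-the-shelf components (logistic regression fit with an L-BFGS quasi-Newton solver, preceded by either mutual-information feature selection or PCA), is shown below; the data handling and parameter values are placeholders, not the thesis code.

```python
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_detector(use_pca=False, n_features=20):
    """Binary normal/abnormal detector: feature reduction followed by logistic
    regression trained with the lbfgs solver (a limited-memory BFGS variant)."""
    reducer = (PCA(n_components=n_features) if use_pca
               else SelectKBest(mutual_info_classif, k=n_features))
    return make_pipeline(StandardScaler(), reducer,
                         LogisticRegression(solver="lbfgs", max_iter=1000))

# Hypothetical usage with KDD99-style numeric features:
# detector = build_detector(use_pca=True)
# detector.fit(X_train, y_train)          # y: 0 = normal, 1 = attack
# print(detector.score(X_test, y_test))
```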
|
287 |
Domain adaptation for classifying disaster-related Twitter data. Sopova, Oleksandra. January 1900 (links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Machine learning is the subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed, as defined by Arthur Samuel, the American pioneer in the field of computer gaming and artificial intelligence who was born in Emporia, Kansas.
Supervised Machine Learning is focused on building predictive models given labeled training data. Data may come from a variety of sources, for instance, social media networks.
In our research, we use Twitter data, specifically, user-generated tweets about disasters such as floods, hurricanes, terrorist attacks, etc., to build classifiers that could help disaster management teams identify useful information.
A supervised classifier trained on data (training data) from a particular domain (i.e. disaster) is expected to give accurate predictions on unseen data (testing data) from the same domain, assuming that the training and test data have similar characteristics. Labeled data is not easily available for a current target disaster.
However, labeled data from a prior source disaster is presumably available, and can be used to learn a supervised classifier for the target disaster.
Unfortunately, the source disaster data and the target disaster data may not share the same characteristics, and the classifier learned from the source may not perform well on the target. Domain adaptation techniques, which use unlabeled target data in addition to
labeled source data, can be used to address this problem.
We study single-source and multi-source domain adaptation techniques, using a Naïve Bayes classifier.
Experimental results on Twitter datasets corresponding to six disasters show that domain adaptation techniques improve the overall performance as compared to basic supervised learning classifiers.
Domain adaptation is crucial for many machine learning applications, as it enables the use of unlabeled data in domains where labeled data is not available.
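One common way to exploit unlabeled target-domain tweets alongside labeled source-domain tweets is to self-train a Naïve Bayes classifier; the sketch below illustrates that general idea only and is not necessarily the specific single-source or multi-source algorithm evaluated in the thesis.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def self_train_nb(source_texts, source_labels, target_texts,
                  rounds=5, confidence=0.9):
    """Train Naive Bayes on labeled source tweets, then repeatedly pseudo-label
    the most confident unlabeled target tweets and retrain on the union."""
    vec = CountVectorizer(min_df=2)
    X_src = vec.fit_transform(source_texts)
    X_tgt = vec.transform(target_texts)

    X_train, y_train = X_src, np.asarray(source_labels)
    clf = MultinomialNB().fit(X_train, y_train)
    remaining = np.arange(X_tgt.shape[0])

    for _ in range(rounds):
        if remaining.size == 0:
            break
        proba = clf.predict_proba(X_tgt[remaining])
        confident = proba.max(axis=1) >= confidence
        if not confident.any():
            break
        picked = remaining[confident]
        X_train = vstack([X_train, X_tgt[picked]])
        y_train = np.concatenate([y_train, clf.predict(X_tgt[picked])])
        remaining = remaining[~confident]
        clf = MultinomialNB().fit(X_train, y_train)
    return clf, vec
```

In practice such a self-trained classifier would be compared against a source-only baseline on labeled target test data, as the experiments across six disasters do.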
|
288 |
Predicting the concentration of residual methanol in industrial formalin using machine learning / Förutspå koncentrationen av resterande metanol i industriell formalin med hjälp av maskininlärning. Heidkamp, William. January 2016 (links)
In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning Toolbox™ from The MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian process regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using the MATLAB Compiler™.
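The thesis builds its model in MATLAB with the Statistics and Machine Learning Toolbox; purely as an illustration of the same modelling step in an open-source setting, and with made-up names for the process inputs, a Gaussian process regression fit might look like this:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical process measurements: each row could hold reactor temperature,
# methanol feed rate, and catalyst age; y is lab-measured residual methanol (wt%).
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# gpr.fit(X_train, y_train)
# mean, std = gpr.predict(X_new, return_std=True)  # prediction with uncertainty
```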
|
289 |
A Machine Learning Approach to Determine Oyster Vessel Behavior. Frey, Devin. 16 December 2016 (links)
A support vector machine (SVM) classifier was designed to replace a previous classifier which predicted oyster vessel behavior in the public oyster grounds of Louisiana. The SVM classifier predicts vessel behavior (docked, poling, fishing, or traveling) based on each vessel’s speed and either net speed or movement angle. The data from these vessels was recorded by a Vessel Monitoring System (VMS), and stored in a PostgreSQL database. The SVM classifier was written in Python, using the scikit-learn library, and was trained by using predictions from the previous classifier. Several validation and parameter optimization techniques were used to improve the SVM classifier’s accuracy. The previous classifier could classify about 93% of points from July 2013 to August 2014, but the SVM classifier can classify about 99.7% of those points. This new classifier can easily be expanded with additional features to further improve its predictive capabilities.
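Because the classifier described above is explicitly a scikit-learn SVM over speed and movement-angle features with parameter optimization, a small sketch of that kind of setup is shown below; the grid values, feature layout, and training data are stand-ins, not the thesis's actual configuration.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

BEHAVIORS = ["docked", "poling", "fishing", "traveling"]

# Each VMS record contributes two features, e.g. [speed_knots, movement_angle_deg];
# training labels come from the previous rule-based classifier, as in the thesis.
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.1, 1]}
search = GridSearchCV(pipeline, param_grid, cv=5)

# search.fit(X_train, y_train)       # y values drawn from BEHAVIORS
# predictions = search.predict(X_new)
```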
|
290 |
Embodied Visual Object Recognition / Förkroppsligad objektigenkänning. Wallenberg, Marcus. January 2017 (links)
Object recognition is a skill we as humans often take for granted. Due to our formidable object learning, recognition and generalisation skills, it is sometimes hard to see the multitude of obstacles that need to be overcome in order to replicate this skill in an artificial system. Object recognition is also one of the classical areas of computer vision, and many ways of approaching the problem have been proposed. Recently, visually capable robots and autonomous vehicles have increased the focus on embodied recognition systems and active visual search. These applications demand that systems can learn and adapt to their surroundings, and arrive at decisions in a reasonable amount of time, while maintaining high object recognition performance. This is especially challenging due to the high dimensionality of image data. In cases where end-to-end learning from pixels to output is needed, mechanisms designed to make inputs tractable are often necessary for less computationally capable embodied systems. Active visual search also means that mechanisms for attention and gaze control are integral to the object recognition procedure. Therefore, the way in which attention mechanisms should be introduced into feature extraction and estimation algorithms must be carefully considered when constructing a recognition system.

This thesis describes work done on the components necessary for creating an embodied recognition system, specifically in the areas of decision uncertainty estimation, object segmentation from multiple cues, adaptation of stereo vision to a specific platform and setting, problem-specific feature selection, efficient estimator training and attentional modulation in convolutional neural networks. Contributions include the evaluation of methods and measures for predicting the potential uncertainty reduction that can be obtained from additional views of an object, allowing for adaptive target observations. Also, in order to separate a specific object from other parts of a scene, it is often necessary to combine multiple cues such as colour and depth in order to obtain satisfactory results; therefore, a method for combining these using channel coding has been evaluated. In order to make use of three-dimensional spatial structure in recognition, a novel stereo vision algorithm extension along with a framework for automatic stereo tuning have also been investigated. Feature selection and efficient discriminant sampling for decision tree-based estimators have also been implemented. Finally, attentional multi-layer modulation of convolutional neural networks for recognition in cluttered scenes has been evaluated. Several of these components have been tested and evaluated on a purpose-built embodied recognition platform known as Eddie the Embodied. / Embodied Visual Object Recognition / FaceTrack
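As a loose illustration only: one simple way to add per-channel attentional modulation to convolutional feature maps is a squeeze-and-excitation-style gate, sketched below. This is an assumed mechanism chosen for illustration, not the specific multi-layer modulation scheme evaluated in the thesis.

```python
import torch
import torch.nn as nn

class ChannelModulation(nn.Module):
    """Per-channel attentional gating of a convolutional feature map
    (a squeeze-and-excitation-style block, used here only as an illustration)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feature_map):
        # Scale each channel by a learned attention weight in [0, 1].
        return feature_map * self.gate(feature_map)

# x = torch.randn(1, 64, 32, 32)
# modulated = ChannelModulation(64)(x)
```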
|