11

Machine Learning Multi-Stage Classification and Regression in the Search for Vector-like Quarks and the Neyman Construction in Signal Searches

Leone, Robert Matthew January 2016 (has links)
A search for vector-like quarks (VLQs) decaying to a Z boson using multi-stage machine learning was compared to a search using a standard square-cuts strategy. VLQs are predicted by several new theories beyond the Standard Model. The searches used 20.3 inverse femtobarns of proton-proton collisions at a center-of-mass energy of 8 TeV, collected with the ATLAS detector in 2012 at the CERN Large Hadron Collider. CLs upper limits on the production cross sections of vector-like top and bottom quarks were computed for VLQs produced singly or in pairs (the Tsingle, Bsingle, Tpair, and Bpair channels). The two-stage machine learning classification strategy did not provide any improvement over the standard square-cuts strategy, but for Tpair, Bpair, and Tsingle, a third stage of machine learning regression was able to lower the upper limits at high signal masses by as much as 50%. Additionally, new test statistics were developed for use in the Neyman construction of confidence regions, in order to address deficiencies in current frequentist methods such as the generation of empty-set confidence intervals. A new method for treating nuisance parameters was also developed that may provide better coverage properties than current methods used in particle searches. Finally, significance ratio functions were derived that allow a more nuanced interpretation of the evidence provided by measurements than is given by confidence intervals alone.
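As a concrete illustration of the CLs limit setting mentioned above, here is a minimal sketch for a single-bin counting experiment, using the observed event count itself as the test statistic. The background level, observed count, and scan step are invented toy numbers, not values from the analysis, which uses full multi-stage ML discriminants.

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for a Poisson(mu) variable."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def cls(n_obs, b, s):
    """CLs = CL_{s+b} / CL_b for a single-bin counting experiment,
    with the observed count as the test statistic."""
    p_sb = poisson_cdf(n_obs, s + b)   # P(N <= n_obs | signal + background)
    p_b = poisson_cdf(n_obs, b)        # P(N <= n_obs | background only)
    return p_sb / p_b if p_b > 0 else 1.0

def cls_upper_limit(n_obs, b, alpha=0.05, step=0.01):
    """Scan the signal yield upward until CLs drops below alpha (95% CL)."""
    s = 0.0
    while cls(n_obs, b, s) > alpha:
        s += step
    return s

if __name__ == "__main__":
    # Toy numbers (hypothetical): 3 events observed over an expected background of 2.7.
    print(f"95% CLs upper limit on s: {cls_upper_limit(3, 2.7):.2f} events")
```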
12

Using Peak Intensity and Fragmentation Patterns in Peptide SeQuence IDentification (SQID) - A Bayesian Learning Algorithm for Tandem Mass Spectra

Ji, Li January 2006 (has links)
As DNA sequence information becomes increasingly available, researchers are tackling the great challenge of characterizing and identifying peptides and proteins from complex mixtures, and automatic database searching algorithms have been developed to meet this challenge. This dissertation aims to improve these algorithms, achieving more accurate and efficient peptide and protein identification with greater confidence by incorporating peak intensity information and peptide cleavage patterns obtained in gas-phase ion dissociation research. The underlying hypothesis is that these algorithms can benefit from knowledge of the molecular-level fragmentation behavior of particular amino acid residues or residue combinations.

SeQuence IDentification (SQID), developed in this dissertation research, is a novel Bayesian learning-based method that incorporates intensity information from peptide cleavage patterns into a database searching algorithm. It directly uses estimated peak intensity distributions for cleavage at amino acid pairs, derived from probability histograms generated from experimental MS/MS spectra. Rather than assuming amino acid cleavage patterns artificially or disregarding intensity information, SQID takes advantage of observed fragmentation intensity behavior. In addition, SQID avoids generating a theoretical spectrum prediction for each candidate sequence, as required by other sequencing methods including SEQUEST; as a result, computational efficiency is significantly improved.

Extensive testing has been performed to evaluate SQID, using datasets from the Pacific Northwest National Laboratory, the University of Colorado, and the Institute for Systems Biology. The computational results show that incorporating peak intensity distribution information greatly enhances the program's ability to distinguish correct peptides from incorrect matches. This observation is consistent across experiments involving various peptides and searches against larger databases with distractor proteins, indirectly verifying that peptide dissociation behavior underlies peptide sequencing and protein identification in MS/MS. Furthermore, testing SQID on previously identified clusters of spectra associated with unique chemical structure motifs leads to the following conclusions: (1) the improvement in identification confidence is observed across a range of peptides displaying different fragmentation behaviors; and (2) the magnitude of improvement agrees with peptide cleavage selectivity, that is, more selective peptide cleavages yield more significant improvements.
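To illustrate the flavor of intensity-aware Bayesian scoring, here is a toy sketch in which a candidate peptide is scored by the log-probability of the observed (binned) peak intensities at its cleavage-site residue pairs. The histograms, binning, and peptides below are invented stand-ins for SQID's distributions learned from real MS/MS spectra.

```python
import math

# Hypothetical learned distributions: for each amino-acid pair flanking a backbone
# cleavage, the probability of observing a (binned) relative peak intensity.
# SQID estimates such histograms from large sets of identified spectra; the
# numbers below are invented for illustration.
INTENSITY_HIST = {
    ("D", "P"): [0.05, 0.10, 0.25, 0.60],   # selective cleavage: intense peaks likely
    ("K", "P"): [0.50, 0.30, 0.15, 0.05],   # weak cleavage: intense peaks unlikely
}
DEFAULT_HIST = [0.25, 0.25, 0.25, 0.25]      # uninformative fallback for unseen pairs

def intensity_bin(rel_intensity, n_bins=4):
    """Map a relative intensity in [0, 1] to a histogram bin index."""
    return min(int(rel_intensity * n_bins), n_bins - 1)

def sqid_like_score(peptide, observed):
    """Naive-Bayes-style score: sum of log P(intensity bin | residue pair) over
    the cleavage sites whose fragment peaks were observed.
    `observed` maps cleavage position -> relative peak intensity."""
    score = 0.0
    for pos, rel_int in observed.items():
        pair = (peptide[pos - 1], peptide[pos])
        hist = INTENSITY_HIST.get(pair, DEFAULT_HIST)
        score += math.log(hist[intensity_bin(rel_int)])
    return score

# A candidate with a D|P cleavage should outscore one with K|P for the same strong peak.
print(sqid_like_score("ADPK", {2: 0.9}))  # pair ("D", "P"), intense peak
print(sqid_like_score("AKPD", {2: 0.9}))  # pair ("K", "P"), intense peak
```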
13

An Ensemble Method for Large Scale Machine Learning with Hadoop MapReduce

Liu, Xuan 25 March 2014 (has links)
We propose a new ensemble algorithm, the meta-boosting algorithm, which enables the original AdaBoost algorithm to improve the decisions made by different weak learners using a meta-learning approach. Better accuracy is achieved because the algorithm reduces both bias and variance. However, higher accuracy also brings higher computational complexity, especially on big data. We therefore propose a parallelized meta-boosting algorithm, Parallelized-Meta-Learning (PML), using the MapReduce programming paradigm on Hadoop. Experimental results on the Amazon EC2 cloud computing infrastructure show that PML reduces computational cost enormously while achieving lower error rates than runs on a single computer. Because MapReduce cannot directly support iteration within an algorithm, our approach is doubly beneficial: it overcomes this inherent weakness while maintaining good accuracy. We also compare this approach against the contemporary algorithm AdaBoost.PL.
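A minimal local simulation of the partition-parallel idea described above: scikit-learn's AdaBoostClassifier stands in for the thesis's boosted weak learners, and the dataset, partition count, and majority-vote combiner are illustrative assumptions, not the thesis's exact design (on Hadoop the per-partition training would run in mappers and the combination in a reducer).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# "Map": train one AdaBoost model per disjoint partition of the training data.
n_partitions = 4
parts = np.array_split(np.arange(len(X_tr)), n_partitions)
models = [AdaBoostClassifier(n_estimators=50, random_state=i).fit(X_tr[idx], y_tr[idx])
          for i, idx in enumerate(parts)]

# "Reduce": combine the partition models by majority vote.
votes = np.stack([m.predict(X_te) for m in models])
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_hat == y_te).mean())
```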
14

Autopoietic approach to cultural transmission

Papadopoulos-Korfiatis, Alexandros January 2017 (has links)
Non-representational cognitive science is a promising research field that provides an alternative to the view of the brain as a “computer” filled with symbolic representations of the world and of cognition as “calculations” performed on those symbols. Autopoiesis is a biological, bottom-up, non-representational theory of cognition in which representations and meaning are framed as explanatory concepts constituted in an observer’s description of a cognitive system, not operational concepts in the system itself. One problem for autopoiesis, as for all non-representational theories, is that it struggles to scale up to high-level cognitive behaviour such as language. The Iterated Learning Model (ILM) is a theory of language evolution showing that certain features of language are explained not by something happening in the linguistic agent’s brain, but as the product of the evolution of the linguistic system itself under the pressures of learnability and expressivity. Our goal in this work is to combine an autopoietic approach with the cultural transmission chains the ILM uses, as a first step towards an autopoietic explanation of the evolution of language. To that end, we introduce a simple joint-action physical task in which agents are rewarded for dancing around each other in either of two directions, left or right. The agents are simulated e-pucks with continuous-time recurrent neural networks (CTRNNs) as nervous systems. First, we adapt a biologically plausible reinforcement learning algorithm based on spike-timing-dependent plasticity tagging and dopamine reward signals. We show that, using this algorithm, our agents can successfully learn the left/right dancing task, and we examine how learning time influences the agents’ task success rates. Following that, we link individual learning episodes into cultural transmission chains and show that an expert agent’s initial behaviour is successfully transmitted along long chains. We investigate the conditions under which these transmission chains break down, as well as the emergence of behaviour in the absence of expert agents. Using long transmission chains, we look at the boundary conditions for the re-establishment of transmitted behaviour after chain breakdowns. Bringing these experiments together, we discuss their significance for non-representational cognitive science and draw some interesting parallels to existing Iterated Learning research; finally, we close by putting forward a number of ideas for additions and future research directions.
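Since the agents' nervous systems are CTRNNs, a minimal Euler-integration step of the standard CTRNN equations may help make that concrete; the two-neuron network and all parameter values below are invented for illustration, not taken from the thesis.

```python
import numpy as np

def ctrnn_step(y, I, W, tau, bias, dt=0.01):
    """One Euler step of a continuous-time recurrent neural network:
        tau_i dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + bias_j) + I_i
    y: neuron states, I: external input, W: weights, tau: time constants."""
    sigma = 1.0 / (1.0 + np.exp(-(y + bias)))      # logistic activation
    dydt = (-y + W @ sigma + I) / tau
    return y + dt * dydt

# Tiny two-neuron network driven by a constant input (all parameters invented).
rng = np.random.default_rng(0)
y = np.zeros(2)
W = rng.normal(scale=1.0, size=(2, 2))
tau, bias = np.ones(2) * 0.1, np.zeros(2)
for _ in range(1000):
    y = ctrnn_step(y, I=np.array([0.5, 0.0]), W=W, tau=tau, bias=bias)
print("settled state:", y)
```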
15

Investigation on integration of sustainable manufacturing and mathematical programming for technology selection and capacity planning

Nejadi, Fahimeh January 2016 (has links)
Concerns about energy supply and climate change have been driving companies towards more sustainable manufacturing, even as they keep an eye on the economic side. One practicable step towards sustainability in manufacturing is choosing the more sustainable options among available technologies. The combination of the two functions of ‘Technology Selection’ and ‘Capacity Planning’ is rarely addressed in the research literature, yet integrated decisions on both at this strategic level are essential; this is supported by justifications in selected manufacturing areas, particularly concerning economies of scale and accumulated knowledge. Furthermore, manufacturing firms operate in a continuously changing, globally competitive environment, and strategic design of systems under such circumstances requires a carefully modelled approach to deal with the complexity of the uncertainties involved.

The overall aim of the project is to develop an integrated methodological approach to solving the combined ‘technology selection’ and ‘capacity planning’ problem in the manufacturing sector, incorporating the multi-perspective concept of sustainability while taking uncertainties into account. A framework consisting of four modules is proposed. The ‘Problem Structuring’ module adopts an ontology method to map the technology-mix combinations and to capture input data. The ‘Optimisation for Sustainable Manufacturing’ module addresses the optimisation of technology selection and capacity planning decisions in an integrated way, using a Goal/Mixed Integer Programming method. The model takes the multi-criteria aspect of sustainable development into account; three criteria are involved: a) environmental (e.g. energy consumption and emissions), b) economic, and c) technical (e.g. quality). A ‘normalisation by comparison with the best value’ algorithm is adopted to facilitate systematic comparison among the various criteria. The economic evaluation is based on a ‘Life-Cycle Analysis’ approach: the ‘Present Value (PV)’ method is adopted to address the ‘Time Value of Money’, taking both inflation and market return into account to make the proposed model more realistic, and a mathematical model representing the total PV of each technology investment, including both capital and running costs, is developed. The ‘Sensitivity Analysis’ module addresses the uncertainty element of the problem: a controlled set of re-optimisation runs, guided by a tool coded in Visual Basic for Applications (VBA), performs intensive sensitivity analyses. Within the ‘Solution Structuring’ module, two knowledge-structuring schemes, a Decision Tree and an Interactive Slider Diagram, are proposed to deal with the large solution sets generated by the ‘Sensitivity Analysis’ module; a novel hybrid supervised/unsupervised machine learning algorithm generates a decision tree that structures the solution set, with the unsupervised stage implemented using the DBSCAN algorithm and the supervised stage adopting the C4.5 algorithm.

The methodological approach is tested and validated using an exemplar case study on coating processes in an automotive company. The case is characterised by three operations, twelve possible technology-mix states, both capital-budget and environmental limits, and 243 sensitivity-analysis experiments. The painting systems are evaluated and compared on quality, technology life-cycle costs, and potential emissions of volatile organic compounds (VOCs) into the air.
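As a sketch of how the PV method can fold inflation and market return into a technology's life-cycle cost, consider the following toy calculation; the rates, horizon, end-of-year cash-flow timing, and cost figures are illustrative assumptions, not the thesis's calibrated values.

```python
def technology_present_value(capital_cost, annual_running_cost, years,
                             inflation=0.03, market_return=0.08):
    """Total present value of a technology investment: the up-front capital
    cost plus running costs that escalate with inflation and are discounted
    at the market rate of return (end-of-year cash flows assumed)."""
    pv = capital_cost
    for t in range(1, years + 1):
        cost_t = annual_running_cost * (1 + inflation) ** t   # inflated running cost
        pv += cost_t / (1 + market_return) ** t               # discounted to today
    return pv

# Compare two hypothetical coating technologies over a 10-year horizon.
print(f"Tech A PV: {technology_present_value(500_000, 80_000, 10):,.0f}")
print(f"Tech B PV: {technology_present_value(300_000, 110_000, 10):,.0f}")
```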
17

Analysis of Classifier Weaknesses Based on Patterns and Corrective Methods

Skapura, Nicholas 18 May 2021 (has links)
No description available.
18

Creative Adaptation through Learning

Cully, Antoine 21 December 2015 (has links)
Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, for example in search and rescue, disaster response, health care, and transportation. They are also invaluable tools for scientific exploration of distant planets or deep oceans. A major obstacle to their widespread adoption in more complex environments and outside of factories is their fragility. While animals can quickly adapt to injuries, current robots cannot “think outside the box” to find a compensatory behavior when they are damaged: they are limited to their pre-specified self-sensing abilities, which can diagnose only anticipated failure modes and strongly increase the overall complexity of the robot. In this thesis, we propose a different approach that considers having robots learn appropriate behaviors in response to damage. However, current learning techniques are slow even with small, constrained search spaces. To allow fast and creative adaptation, we combine the creativity of evolutionary algorithms with the learning speed of policy search algorithms through three contributions: behavioral repertoires, damage recovery using these repertoires, and the transfer of knowledge across tasks. Globally, this work aims to provide the algorithmic foundations that will allow physical robots to be more robust, effective, and autonomous.
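The behavioral repertoires mentioned above are built with quality-diversity evolutionary methods; the sketch below shows a minimal MAP-Elites-style loop that fills a grid over a behaviour descriptor with the best solution found per cell. The toy genotype, descriptor, and fitness are invented stand-ins for the thesis's robot controllers and gait descriptors.

```python
import random

GRID = 10
archive = {}  # (i, j) cell -> (fitness, genotype)

def evaluate(genome):
    """Toy task: descriptor = first two genes, fitness rewards large later genes."""
    descriptor = (genome[0], genome[1])
    fitness = sum(genome[2:])
    return fitness, descriptor

def cell(descriptor):
    """Discretize a descriptor in [0, 1]^2 onto the grid."""
    return tuple(min(int(d * GRID), GRID - 1) for d in descriptor)

random.seed(0)
for _ in range(20_000):
    if archive and random.random() < 0.9:                  # mutate a random elite
        parent = random.choice(list(archive.values()))[1]
        genome = [min(max(g + random.gauss(0, 0.05), 0.0), 1.0) for g in parent]
    else:                                                  # or sample a fresh genome
        genome = [random.random() for _ in range(6)]
    fit, desc = evaluate(genome)
    key = cell(desc)
    if key not in archive or fit > archive[key][0]:        # keep the elite per cell
        archive[key] = (fit, genome)

print(f"repertoire covers {len(archive)}/{GRID * GRID} cells")
```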
19

Computer Vision and Building Envelopes

Anani-Manyo, Nina K. 29 April 2021 (has links)
No description available.
20

Anomaly Detection Using Machine Learning for Intrusion Detection

Rudraraju, Vaishnavi 02 May 2024 (has links)
This thesis examines machine learning approaches for anomaly detection in network security, focusing on intrusion detection over the TCP and UDP protocols. It uses logistic regression models to distinguish between normal and abnormal network actions, demonstrating a strong ability to detect potential security concerns. The study uses the UNSW-NB15 dataset for model validation, allowing a thorough evaluation of the models' capacity to detect anomalies in realistic network scenarios. UNSW-NB15 is a comprehensive network-attack dataset frequently used to evaluate intrusion detection systems and anomaly detection algorithms because of its realistic attack scenarios and varied network activities.

Further investigation is carried out with a multi-task neural network built for binary and multi-class classification. This allows in-depth study of network data, making it easier to identify potential threats. The model is fine-tuned over successive training epochs, with a focus on validation metrics to ensure generalizability. The thesis also applies early stopping to the training process, which reduces the risk of overfitting and improves the model's performance on new, unseen data.

The thesis additionally uses blockchain technology to track model performance indicators, a novel strategy that improves data integrity and reliability: a blockchain-based logging system keeps an immutable record of the models' performance over time, supporting a transparent and verifiable anomaly detection framework.

In sum, this research enhances machine learning approaches for network anomaly detection, proposing scalable and effective methods for early detection and mitigation of network intrusions and ultimately improving the security posture of network systems.
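A hedged sketch of the first stage described above: a logistic-regression classifier with validation-based early stopping. Synthetic data stands in for the UNSW-NB15 records (the real dataset requires download and preprocessing), and the patience rule is one simple realization of the early-stopping mechanism, not the thesis's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for preprocessed flow records, ~10% "attack" traffic.
X, y = make_classification(n_samples=20_000, n_features=30, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SGDClassifier(loss="log_loss", random_state=42)  # logistic regression via SGD
best_score, patience, bad_epochs = -np.inf, 3, 0
for epoch in range(50):
    clf.partial_fit(X_tr, y_tr, classes=np.array([0, 1]))  # one epoch of updates
    score = clf.score(X_val, y_val)
    if score > best_score:
        best_score, bad_epochs = score, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                         # early stopping
            print(f"stopping at epoch {epoch}")
            break
print(f"best validation accuracy: {best_score:.3f}")
```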
