  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
661

Pedestrian flow measurement using image processing techniques

Zhang, Xiaowei January 1999 (has links)
No description available.
662

Using machine-learning to efficiently explore the architecture/compiler co-design space

Dubach, Christophe January 2009 (has links)
Designing new microprocessors is a time-consuming task. Architects rely on slow simulators to evaluate performance, and a significant proportion of the design space has to be explored before an implementation is chosen. This process becomes even more time-consuming when compiler optimisations are also considered. Once the architecture is selected, a new compiler must be developed and tuned. What is needed are techniques that can speed up this whole process and develop a new optimising compiler automatically. This thesis proposes the use of machine-learning techniques to address architecture/compiler co-design. First, two performance models are developed and are used to efficiently search the design space of a microarchitecture. These models accurately predict performance metrics such as cycles or energy, or a tradeoff of the two. The first model uses just 32 simulations to model the entire design space of new applications, an order of magnitude fewer than state-of-the-art techniques. The second model addresses offline training costs and predicts the average behaviour of a complete benchmark suite. Compared to the state-of-the-art, it needs five times fewer training simulations when applied to the SPEC CPU 2000 and MiBench benchmark suites. Next, the impact of compiler optimisations on the design process is considered. This has the potential to change the shape of the design space and improve performance significantly. A new model is proposed that predicts the performance obtainable by an optimising compiler for any design point, without having to build the compiler. Compared to the state-of-the-art, this model achieves a significantly lower error rate. Finally, a new machine-learning optimising compiler is presented that predicts the best compiler optimisation setting for any new program on any new microarchitecture. It achieves an average speedup of 1.14x over the default best gcc optimisation level, representing 61% of the maximum speedup available, using just one profile run of the application.
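As a rough illustration of the core idea, the sketch below trains a regression model on a small sample of simulated design points and then predicts performance across the rest of the space. Everything here is a hypothetical stand-in: a random-forest regressor substitutes for the thesis's models, and a synthetic linear function substitutes for the cycle-accurate simulator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: each row is one microarchitecture
# configuration (e.g. issue width, cache size, pipeline depth, ...).
design_space = rng.integers(1, 9, size=(10_000, 6))

# Simulate only a small training sample (the thesis uses as few as
# 32 simulations); a toy linear function stands in for the simulator.
train_idx = rng.choice(len(design_space), size=32, replace=False)
simulated_cycles = design_space[train_idx] @ [5, -3, 2, 7, -1, 4] \
    + rng.normal(0, 2, 32)

# Fit a predictive model on the 32 simulated points...
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(design_space[train_idx], simulated_cycles)

# ...then predict cycles for every other point in the design space,
# avoiding thousands of slow simulations.
predicted = model.predict(design_space)
best = design_space[np.argmin(predicted)]
print("predicted-best configuration:", best)
```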
663

Introducing corpus-based rules and algorithms in a rule-based machine translation system

Dugast, Loic January 2013 (has links)
Machine translation offers the challenge of automatically translating a text from one natural language into another. Statistical methods, originating from the field of information theory, have proven to be a major breakthrough in the field of machine translation. Prior to this paradigm, many systems had been developed following a rule-based approach: a system based on a linguistic description of the languages involved and of how translation occurs in the mind of the (human) translator. Statistical models, on the contrary, use empirical means and may work with very few linguistic hypotheses about language and translation as performed by humans. This has implications for rule-based translation systems, in terms of software architecture and the nature of the rules, which were manually input and lacked any statistical features. In view of such diverging paradigms, we can imagine trying to combine both in a hybrid system. In the present work, we start by examining the state of the art of both rule-based and statistical systems. We restrict the rule-based approach to transfer-based systems. We compare the rule-based and statistical paradigms in terms of global translation quality and give a qualitative analysis of their respective specific errors. We also introduce initial black-box hybrid models that confirm there is a gain to be expected from combining the two approaches. Motivated by the qualitative analysis, we focus our study and experiments on lexical phrasal rules. We propose a setup for extracting such resources from corpora. Going one step further in the integration of the rule-based and statistical approaches, we then examine how to combine the extracted rules with decoding modules that allow for a corpus-based handling of ambiguity. This leads to the final deliverable of this work: a rule-based system for which we can learn non-deterministic rules from corpora, and whose decoder can be optimised on a tuning set in the same domain.
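A toy sketch of corpus-based phrasal-rule extraction in this spirit, assuming phrase pairs have already been produced by a word aligner; the thesis's actual extraction setup is more elaborate. Note the rules keep several weighted targets per source phrase, i.e. they are non-deterministic and leave disambiguation to the decoder.

```python
from collections import Counter

# Toy aligned phrase pairs, as might be harvested from a word-aligned
# bilingual corpus (in practice: millions of pairs from an aligner).
aligned_phrases = [
    ("machine translation", "traduction automatique"),
    ("machine translation", "traduction automatique"),
    ("machine translation", "traduction mécanique"),
    ("rule-based", "à base de règles"),
]

counts = Counter(aligned_phrases)
source_totals = Counter(src for src, _ in aligned_phrases)

# Non-deterministic rule set: every (source -> target) pair with its
# corpus-estimated probability, for a downstream decoder to weigh.
rules = {(src, tgt): n / source_totals[src]
         for (src, tgt), n in counts.items()}
for (src, tgt), p in sorted(rules.items(), key=lambda kv: -kv[1]):
    print(f"{src!r} -> {tgt!r}  p={p:.2f}")
```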
664

Quantifying the Trenches: Machine Learning Applied to NFL Offensive Lineman Valuation

Pyne, Sean 01 January 2017 (has links)
There are 32 teams in the National Football League, all competing to be the best by creating the strongest roster possible. The problem of evaluating talent has created extreme competition between teams, in the form of a rookie draft and a fiercely competitive veteran free-agent market. The difficulty with player evaluation is due to the noise associated with measuring a particular player's value. The intent of this paper is to create an algorithm for identifying pricing inefficiencies in these player markets. In particular, this paper focuses on the veteran free-agent market for offensive linemen in the NFL. NFL offensive linemen are difficult to evaluate empirically because of the significant amount of noise present, due to the inability to measure a lineman's performance directly. The algorithm first uses a machine learning technique, k-means cluster analysis, to generate comparative sets of offensive linemen. Then, using each set of comparable offensive linemen, the algorithm flags any linemen whose earnings vary significantly from those of their peers. It is in this fashion that the algorithm provides relative valuations for particular offensive linemen.
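A minimal sketch of this two-step procedure with synthetic stand-ins for the performance features and salaries; the cluster count, feature set, and the two-standard-deviation flagging threshold are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-lineman features (e.g. snaps played, penalties,
# sacks allowed, age) and current salary; real data would come from
# NFL contract and performance databases.
features = rng.normal(size=(300, 4))
salary = rng.lognormal(mean=1.0, sigma=0.4, size=300)

# Step 1: group linemen into comparable sets with k-means.
X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Step 2: within each cluster, flag linemen whose salary deviates
# strongly from their peers' (a candidate pricing inefficiency).
for c in range(8):
    s = salary[clusters == c]
    z = (s - s.mean()) / s.std()
    flagged = np.flatnonzero(np.abs(z) > 2.0)
    print(f"cluster {c}: {len(flagged)} flagged of {len(s)}")
```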
665

An evaluation of the accuracy of community-based automated blood pressure machines

Vogel, Elisa, Bowen, Shannon January 2010 (has links)
Class of 2010 Abstract / OBJECTIVES: The purpose of this study was to evaluate the accuracy of automated blood pressure machines located within community-based pharmacies. METHODS: A descriptive, prospective study was performed comparing blood pressure readings obtained from community-based automated blood pressure machines to readings from a mercury manometer for two different arm sizes. Mercury manometer readings were obtained using the standardized technique and a standard cuff recommended by the American Heart Association. RESULTS: For the subject with the small arm size, the automated blood pressure machines reported systolic pressure readings that were, on average, 16.1 mmHg higher than those obtained manually by the researcher. The mean systolic pressure readings for the subject with the medium arm size were not significantly different between the automated machines and the manual manometer, and the diastolic pressure readings were modestly different. CONCLUSIONS: We found that automated blood pressure machines located within a sample of representative community pharmacies were neither accurate nor reliable. The readings were especially inaccurate for subjects with a smaller than average arm size.
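The comparison reduces to the bias and spread of paired differences between the two devices. The sketch below shows that arithmetic on made-up readings (the study's 16.1 mmHg figure came from its real data, not these numbers).

```python
import numpy as np

# Hypothetical paired readings (mmHg) for one subject: each automated
# machine reading alongside the mercury-manometer reference.
automated = np.array([138, 142, 135, 140, 144])
manual = np.array([122, 125, 120, 124, 127])

diff = automated - manual
print(f"mean bias: {diff.mean():+.1f} mmHg")        # systematic over-reading
print(f"spread (SD): {diff.std(ddof=1):.1f} mmHg")  # reading-to-reading reliability
```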
666

Online intrusion detection design and implementation for SCADA networks

Wang, Hongrui 25 April 2017 (has links)
The standardization and interconnection of supervisory control and data acquisition (SCADA) systems has exposed these systems to cyber attacks. Intrusion detection system (IDS) design is an effective method for improving the security of SCADA systems. However, traditional IDS design in industrial networks mainly exploits predefined rules, which need to be complemented and developed to adapt to the big-data scenario. This thesis therefore aims to design an anomaly-based hierarchical online intrusion detection system (HOIDS) for SCADA networks based on machine learning algorithms, and to implement the idea of anomaly-based intrusion detection on a testbed. The theoretical design of HOIDS utilizes a server-client topology while keeping clients distributed for global protection, achieving a high detection rate with minimal network impact. We implement accurate models of normal-abnormal binary detection and multi-attack identification based on logistic regression and a quasi-Newton optimization algorithm using the Broyden-Fletcher-Goldfarb-Shanno approach. The detection system can accelerate detection through information-gain-based feature selection or principal component analysis based dimension reduction. By evaluating our system using the KDD99 dataset and industrial control system datasets, we demonstrate that our design is highly scalable, efficient, and cost-effective for securing SCADA infrastructures. Besides the theoretical IDS design, a testbed is modified and implemented for SCADA network security research. It simulates the working environment of SCADA systems, with functions for data collection and analysis for intrusion detection. The testbed is more flexible and extensible than those in existing related work. In the testbed, the Bro network analyzer is introduced to support research on anomaly-based intrusion detection, and the procedures of both signature-based and anomaly-based intrusion detection using the Bro analyzer are presented. In addition, a generic Linux-based host serves as the container for different network functions, and a human machine interface (HMI) together with the supervising network is set up to simulate the control center. The testbed does not implement a large number of traffic generation methods, but still provides useful examples of generating normal and abnormal traffic, and it can be modified or expanded in future work on SCADA network security. / Graduate
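A compact sketch of the detection pipeline described above, combining information-gain (mutual-information) feature selection with a logistic-regression detector trained by an L-BFGS quasi-Newton solver (the BFGS family the abstract names). The data here is a synthetic stand-in for KDD99-style records; the feature count and k are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Stand-in for KDD99-style network records: 40 numeric features per
# connection, labelled normal (0) or attack (1). The real system would
# derive these from SCADA traffic captures.
X = rng.normal(size=(5_000, 40))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 5_000) > 0).astype(int)

# Information-gain-based feature selection feeding a logistic-regression
# binary detector trained with the L-BFGS quasi-Newton solver.
detector = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    LogisticRegression(solver="lbfgs", max_iter=1000),
)
detector.fit(X, y)
print(f"training accuracy: {detector.score(X, y):.3f}")
```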
667

Domain adaptation for classifying disaster-related Twitter data

Sopova, Oleksandra January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Machine learning is the subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed, as defined by Arthur Samuel, the American pioneer in the fields of computer gaming and artificial intelligence who was born in Emporia, Kansas. Supervised machine learning is focused on building predictive models given labeled training data. Data may come from a variety of sources, for instance, social media networks. In our research, we use Twitter data, specifically user-generated tweets about disasters such as floods, hurricanes, and terrorist attacks, to build classifiers that could help disaster management teams identify useful information. A supervised classifier trained on data (training data) from a particular domain (i.e., disaster) is expected to give accurate predictions on unseen data (testing data) from the same domain, assuming the training and test data have similar characteristics. Labeled data is not easily available for a current target disaster. However, labeled data from a prior source disaster is presumably available and can be used to learn a supervised classifier for the target disaster. Unfortunately, the source disaster data and the target disaster data may not share the same characteristics, and a classifier learned from the source may not perform well on the target. Domain adaptation techniques, which use unlabeled target data in addition to labeled source data, can be used to address this problem. We study single-source and multi-source domain adaptation techniques using a Naïve Bayes classifier. Experimental results on Twitter datasets corresponding to six disasters show that domain adaptation techniques improve overall performance compared to basic supervised learning classifiers. Domain adaptation is crucial for many machine learning applications, as it enables the use of unlabeled data in domains where labeled data is not available.
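One common way to use unlabeled target data with a Naïve Bayes classifier is self-training: pseudo-label confident target instances and refit. The sketch below illustrates that generic scheme on toy tweets; it is an assumption-laden stand-in, not the specific single- or multi-source algorithms studied in the thesis.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy data: labelled tweets from a source disaster, unlabelled tweets
# from the target disaster. Labels: 1 = useful to responders, 0 = not.
source_tweets = ["bridge flooded need rescue", "watching a movie tonight",
                 "shelter open on main street", "great game last night"]
source_labels = [1, 0, 1, 0]
target_tweets = ["road washed out near school", "lol new phone who dis"]

vec = CountVectorizer()
Xs = vec.fit_transform(source_tweets)
Xt = vec.transform(target_tweets)

# Self-training: fit on source, pseudo-label confident target tweets,
# then refit on the combined data so the model adapts to the target.
nb = MultinomialNB().fit(Xs, source_labels)
proba = nb.predict_proba(Xt)
confident = proba.max(axis=1) > 0.7
if confident.any():
    X_aug = sp.vstack([Xs, Xt[confident]])
    y_aug = np.concatenate([source_labels,
                            proba[confident].argmax(axis=1)])
    nb = MultinomialNB().fit(X_aug, y_aug)
print(nb.predict(Xt))
```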
668

Predicting the concentration of residual methanol in industrial formalin using machine learning / Förutspå koncentrationen av resterande metanol i industriell formalin med hjälp av maskininlärning

Heidkamp, William January 2016 (has links)
In this thesis, a machine learning approach was used to develop a predictive model for the concentration of residual methanol in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning Toolbox™ from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. The Gaussian Process Regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using the MATLAB Compiler™.
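The thesis used MATLAB's toolbox; the sketch below shows an analogous Gaussian Process Regression fit in Python with scikit-learn, on synthetic stand-ins for the process variables. The variable names, synthetic target, and kernel choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Hypothetical process variables (e.g. reactor temperature, methanol
# feed rate, pressure) and measured residual methanol concentration.
X = rng.uniform(0, 1, size=(200, 3))
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * np.sin(6 * X[:, 2]) \
    + rng.normal(0, 0.02, 200)

# GPR with an RBF kernel plus a noise term; the GP also returns a
# predictive standard deviation, useful for judging how much to trust
# each individual prediction.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X, y)

mean, std = gpr.predict(X[:5], return_std=True)
for m, s in zip(mean, std):
    print(f"predicted residual methanol: {m:.3f} ± {2 * s:.3f}")
```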
669

A Machine Learning Approach to Determine Oyster Vessel Behavior

Frey, Devin 16 December 2016 (has links)
A support vector machine (SVM) classifier was designed to replace a previous classifier which predicted oyster vessel behavior in the public oyster grounds of Louisiana. The SVM classifier predicts vessel behavior (docked, poling, fishing, or traveling) based on each vessel’s speed and either net speed or movement angle. The data from these vessels was recorded by a Vessel Monitoring System (VMS), and stored in a PostgreSQL database. The SVM classifier was written in Python, using the scikit-learn library, and was trained by using predictions from the previous classifier. Several validation and parameter optimization techniques were used to improve the SVM classifier’s accuracy. The previous classifier could classify about 93% of points from July 2013 to August 2014, but the SVM classifier can classify about 99.7% of those points. This new classifier can easily be expanded with additional features to further improve its predictive capabilities.
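Since the abstract names the ingredients (Python, scikit-learn, an SVM, speed and movement-angle features, parameter optimisation), a minimal sketch is straightforward; the data here is random stand-in VMS output and the parameter grid is illustrative, not the classifier's actual tuning.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Hypothetical VMS-derived features per position report: vessel speed
# (knots) and movement angle (degrees), with the four behaviour labels
# named in the abstract.
X = np.column_stack([rng.uniform(0, 12, 1_000), rng.uniform(0, 360, 1_000)])
y = rng.choice(["docked", "poling", "fishing", "traveling"], size=1_000)

# SVM with scaled inputs and a small grid search over C and gamma,
# echoing the parameter-optimisation step described in the abstract.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(svm,
                    {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1]},
                    cv=3)
grid.fit(X, y)
print("best params:", grid.best_params_)
```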
670

Embodied Visual Object Recognition / Förkroppsligad objektigenkänning

Wallenberg, Marcus January 2017 (has links)
Object recognition is a skill we as humans often take for granted. Due to our formidable object learning, recognition and generalisation skills, it is sometimes hard to see the multitude of obstacles that need to be overcome in order to replicate this skill in an artificial system. Object recognition is also one of the classical areas of computer vision, and many ways of approaching the problem have been proposed. Recently, visually capable robots and autonomous vehicles have increased the focus on embodied recognition systems and active visual search. These applications demand that systems can learn and adapt to their surroundings, and arrive at decisions in a reasonable amount of time, while maintaining high object recognition performance. This is especially challenging due to the high dimensionality of image data. In cases where end-to-end learning from pixels to output is needed, mechanisms designed to make inputs tractable are often necessary for less computationally capable embodied systems. Active visual search also means that mechanisms for attention and gaze control are integral to the object recognition procedure. Therefore, the way in which attention mechanisms should be introduced into feature extraction and estimation algorithms must be carefully considered when constructing a recognition system. This thesis describes work done on the components necessary for creating an embodied recognition system, specifically in the areas of decision uncertainty estimation, object segmentation from multiple cues, adaptation of stereo vision to a specific platform and setting, problem-specific feature selection, efficient estimator training and attentional modulation in convolutional neural networks. Contributions include the evaluation of methods and measures for predicting the potential uncertainty reduction that can be obtained from additional views of an object, allowing for adaptive target observations. Also, in order to separate a specific object from other parts of a scene, it is often necessary to combine multiple cues such as colour and depth in order to obtain satisfactory results. Therefore, a method for combining these using channel coding has been evaluated. In order to make use of three-dimensional spatial structure in recognition, a novel stereo vision algorithm extension along with a framework for automatic stereo tuning have also been investigated. Feature selection and efficient discriminant sampling for decision tree-based estimators have also been implemented. Finally, attentional multi-layer modulation of convolutional neural networks for recognition in cluttered scenes has been evaluated. Several of these components have been tested and evaluated on a purpose-built embodied recognition platform known as Eddie the Embodied. / Embodied Visual Object Recognition / FaceTrack
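As one concrete illustration of attentional modulation of convolutional features (the exact mechanism used in the thesis is not specified here, so this is a generic sketch), the code below re-weights a stack of feature maps with a Gaussian spatial attention map.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy convolutional feature maps: (channels, height, width), as might
# come out of one layer of a CNN.
feature_maps = rng.normal(size=(16, 32, 32))

# A spatial attention map, e.g. derived from gaze or a saliency cue,
# with high weight near the attended location and low weight elsewhere.
ys, xs = np.mgrid[0:32, 0:32]
attention = np.exp(-((ys - 10) ** 2 + (xs - 20) ** 2) / (2 * 5.0 ** 2))

# Multiplicative modulation: every channel is re-weighted by the
# attention map before being passed on, suppressing responses from
# cluttered, unattended regions of the scene.
modulated = feature_maps * attention[None, :, :]
print(modulated.shape)  # (16, 32, 32)
```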
