311

SRML: Space Radio Machine Learning

Ferreira, Paulo Victor Rodrigues 27 April 2017
Space-based communications systems to be employed by future artificial satellites, or by spacecraft during exploration missions, can potentially benefit from software-defined radio adaptation capabilities. Multiple communication requirements could compete for radio resources, whose availability may vary during the spacecraft's operational life span. Electronic components are prone to failure, and new instructions will eventually be received through software updates. Consequently, these changes may require a whole new near-optimal combination of parameters to be derived on-the-fly, without instantaneous human interaction or even without a human in the loop. Thus, achieving a sufficiently optimized set of radio parameters can be challenging, especially when the communication channels change dynamically due to orbital dynamics as well as atmospheric and space weather-related impairments. This dissertation presents an analysis and discussion of novel algorithms proposed to enable a cognition control layer for adaptive communication systems operating in space, using an architecture that merges machine learning techniques with wireless communication principles. The proposed cognitive engine proof-of-concept reasons over time through an efficient accumulated learning process. An implementation of the conceptual design is expected to be delivered to the SDR system located on the International Space Station as part of an experimental program. To support the proposed cognitive engine algorithm development, more realistic satellite-based communications channels are proposed, along with rain attenuation synthesizers for LEO orbits, channel state detection algorithms, and multipath coefficients as a function of the reflector's electrical characteristics. The achieved performance of the proposed solutions is compared with the state-of-the-art, and novel performance benchmarks are provided for future research to reference.
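To make the idea of a cognitive engine that reasons over time through an accumulated learning process concrete, here is a minimal sketch of one such loop: an epsilon-greedy bandit over candidate radio configurations. The configurations, the reward signal, and all names below are illustrative assumptions, not the dissertation's actual design.

```python
import random

# Hypothetical candidate radio configurations (modulation, coding rate, power).
# These arms and the reward signal are illustrative assumptions only.
ARMS = [("QPSK", 0.5, 1.0), ("QPSK", 0.75, 1.0), ("8PSK", 0.75, 2.0)]

counts = [0] * len(ARMS)   # times each configuration was tried
values = [0.0] * len(ARMS) # running mean of observed reward (e.g., throughput)

def select_arm(epsilon=0.1):
    """Epsilon-greedy choice: explore at random (always while some arm is
    still untried), otherwise exploit the best-known configuration."""
    if random.random() < epsilon or 0 in counts:
        return random.randrange(len(ARMS))
    return max(range(len(ARMS)), key=lambda i: values[i])

def update(arm, reward):
    """Accumulate learning over time via an incremental mean."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
```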
312

Change-points Estimation in Statistical Inference and Machine Learning Problems

Zhang, Bingwen 14 August 2017
"Statistical inference plays an increasingly important role in science, finance and industry. Despite the extensive research and wide application of statistical inference, most of the efforts focus on uniform models. This thesis considers the statistical inference in models with abrupt changes instead. The task is to estimate change-points where the underlying models change. We first study low dimensional linear regression problems for which the underlying model undergoes multiple changes. Our goal is to estimate the number and locations of change-points that segment available data into different regions, and further produce sparse and interpretable models for each region. To address challenges of the existing approaches and to produce interpretable models, we propose a sparse group Lasso (SGL) based approach for linear regression problems with change-points. Then we extend our method to high dimensional nonhomogeneous linear regression models. Under certain assumptions and using a properly chosen regularization parameter, we show several desirable properties of the method. We further extend our studies to generalized linear models (GLM) and prove similar results. In practice, change-points inference usually involves high dimensional data, hence it is prone to tackle for distributed learning with feature partitioning data, which implies each machine in the cluster stores a part of the features. One bottleneck for distributed learning is communication. For this implementation concern, we design communication efficient algorithm for feature partitioning data sets to speed up not only change-points inference but also other classes of machine learning problem including Lasso, support vector machine (SVM) and logistic regression."
313

Computer Vision and Machine Learning for Autonomous Vehicles

Chen, Zhilu 22 October 2017
"Autonomous vehicle is an engineering technology that can improve transportation safety, alleviate traffic congestion and reduce carbon emissions. Research on autonomous vehicles can be categorized by functionality, for example, object detection or recognition, path planning, navigation, lane keeping, speed control and driver status monitoring. The research topics can also be categorized by the equipment or techniques used, for example, image processing, computer vision, machine learning, and localization. This dissertation primarily reports on computer vision and machine learning algorithms and their implementations for autonomous vehicles. The vision-based system can effectively detect and accurately recognize multiple objects on the road, such as traffic signs, traffic lights, and pedestrians. In addition, an autonomous lane keeping system has been proposed using end-to-end learning. In this dissertation, a road simulator is built using data collection and augmentation, which can be used for training and evaluating autonomous driving algorithms. The Graphic Processing Unit (GPU) based traffic sign detection and recognition system can detect and recognize 48 traffic signs. The implementation has three stages: pre-processing, feature extraction, and classification. A highly optimized and parallelized version of Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is used. The system can process 27.9 frames per second with the active pixels of a 1,628 by 1,236 resolution, and with the minimal loss of accuracy. In an evaluation using the BelgiumTS dataset, the experimental results indicate that the detection rate is about 91.69% with false positives per window of 3.39e-5, and the recognition rate is about 93.77%. We report on two traffic light detection and recognition systems. The first system detects and recognizes red circular lights only, using image processing and SVM. Its performance is better than that of traditional detectors and it achieves the best performance with 96.97% precision and 99.43% recall. The second system is more complicated. It detects and classifies different types of traffic lights, including green and red lights in both circular and arrow forms. In addition, it employs image processing techniques, such as color extraction and blob detection to locate the candidates. Subsequently, a pre-trained PCA network is used as a multi-class classifier for obtaining frame-by-frame results. Furthermore, an online multi-object tracking technique is applied to overcome occasional misses and a forecasting method is used to filter out false positives. Several additional optimization techniques are employed to improve the detector performance and to handle the traffic light transitions. A multi-spectral data collection system is implemented for pedestrian detection, which includes a thermal camera and a pair of stereo color cameras. The three cameras are first aligned using trifocal tensor, and the aligned data are processed by using computer vision and machine learning techniques. Convolutional channel features (CCF) and the traditional HOG+SVM approach are evaluated over the data captured from the three cameras. Through the use of trifocal tensor and CCF, training becomes more efficient. The proposed system achieves only a 9% log-average miss rate on our dataset. Autonomous lane keeping system employs an end- to-end learning approach for obtaining the proper steering angle for maintaining a car in a lane. 
The convolutional neural network (CNN) model uses raw image frames as input, and it outputs the steering angles corresponding to the input frames. Unlike the traditional approach, which manually decomposes the problem into several parts, such as lane detection, path planning, and steering control, the model learns to extract useful features on its own and learns to steer from human behavior. More importantly, we find that having a simulator for data augmentation and evaluation is important. We then build the simulator using image projection, vehicle dynamics, and vehicle trajectory tracking. The test results reveal that the model trained with augmented data using the simulator has better performance and achieves about a 98% autonomous driving time on our dataset. Furthermore, a vehicle data collection system is developed for building our own datasets from recorded videos. These datasets are used in the above studies and have been released to the public for autonomous vehicle research. The experimental datasets are available at http://computing.wpi.edu/Dataset.html."
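As a rough illustration of the HOG-plus-SVM stage described above (CPU-only, using OpenCV and scikit-learn; the window size, HOG parameters, and data loading are placeholder assumptions, and the GPU parallelization is omitted):

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# 64x64 detection window; these HOG parameters are illustrative, not the thesis values.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def features(img):
    """Resize a candidate window and extract its HOG descriptor."""
    patch = cv2.resize(img, (64, 64))
    return hog.compute(patch).ravel()

def train(windows, labels):
    """Train a linear SVM over HOG features of labeled sign/non-sign windows."""
    X = np.array([features(w) for w in windows])
    clf = LinearSVC(C=1.0)
    clf.fit(X, labels)
    return clf
```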
314

Layout Optimization for Distributed Relational Databases Using Machine Learning

Patvarczki, Jozsef 23 May 2012
A common problem when running Web-based applications is how to scale up the database. The solution to this problem usually involves having a smart database administrator determine how to spread the database tables out amongst computers that will work in parallel. Laying out database tables across multiple machines so they can act together as a single efficient database is hard. Automated methods are needed to help eliminate the time database administrators require to create optimal configurations. We consider four operators that create a search space of possible database layouts: 1) denormalizing, 2) horizontally partitioning, 3) vertically partitioning, and 4) fully replicating. Textbooks offer general advice that is useful for dealing with extreme cases - for instance, you should fully replicate a table if the ratio of inserts to selects is close to zero. But even this seemingly obvious statement is not necessarily one that will lead to a speed-up once you take into account that some nodes might become a bottleneck. There can be complex interactions between the four operators, which makes it even more difficult to predict the best course of action. Instead of using best practices to lay out the database, we need a system that collects empirical data on when these four operators are effective. We have implemented a state-based search technique that tries different operators and then uses the empirically measured data to see whether any speed-up occurred. We recognize that the costs of creating each physical database layout are potentially large, but doing so is necessary because we want to know the "ground truth" about what is effective and under what conditions. After creating a dataset in which these four operators have been applied to make different databases, we can employ machine learning to induce rules that help govern the physical design of the database across an arbitrary number of computer nodes. This learning process, in turn, allows the database placement algorithm to improve over time as it trains on a set of examples. The algorithm tries to learn 1) what a good database layout is for a particular application given a query workload, and 2) whether it can automatically improve its recommendations by using machine-learned rules to generalize when it makes sense to apply each operator. Considerable research has been done on parallelizing databases in which large amounts of data are shipped from one node to another to answer a single query. Since the costs of shipping the data back and forth can be high, in this work we assume that it may be more efficient to create a database layout in which each query can be answered by a single node. This assumption requires that all incoming query templates be known beforehand, a requirement easily satisfied for a Web-based application, since users typically interact with the system through a web interface such as web forms. In this case, unseen queries are not necessarily answerable without first reconstructing the data on a single machine. Prior knowledge of these exact query templates allows us to select the best possible database table placements across multiple nodes. But in the case of trying to improve the efficiency of a Web-based application, a web site provider might be willing to suffer the inconvenience of not being able to answer an arbitrary query if they are in turn provided with a system that runs more efficiently.
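A sketch of the state-based search over the four operators might look as follows; the `measure_latency` benchmark oracle and `apply_op` layout transformer are hypothetical stand-ins for the empirical measurement machinery the abstract describes.

```python
OPERATORS = ["denormalize", "partition_horizontal", "partition_vertical", "replicate"]

def search_layout(initial_layout, tables, measure_latency, apply_op):
    """Greedy state-based search: apply each operator to each table and
    keep any change that the empirical benchmark says is faster.
    `measure_latency` deploys a layout and replays the query workload;
    `apply_op` returns a new candidate layout (both assumed callables)."""
    best, best_cost = initial_layout, measure_latency(initial_layout)
    improved = True
    while improved:
        improved = False
        for op in OPERATORS:
            for table in tables:
                candidate = apply_op(best, op, table)
                cost = measure_latency(candidate)  # empirical "ground truth"
                if cost < best_cost:
                    best, best_cost = candidate, cost
                    improved = True
    return best
```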
315

An Embedded Seizure Onset Detection System

Kindle, Alexander Lawrence 12 September 2013
"A combined hardware and software platform for ambulatory seizure onset detection is presented. The hardware is developed around commercial off-the-shelf components, featuring ADS1299 analog front ends for electroencephalography from Texas Instruments and a Broadcom ARM11 microcontroller for algorithm execution. The onset detection algorithm is a patient-specific support vector machine algorithm. It outperforms a state-of-the-art detector on a reference data set, with 100% sensitivity, 3.4 second average onset detection latency, and on average 1 false positive per 24 hours. The more comprehensive European Epilepsy Database is then evaluated, which highlights several real-world challenges for seizure onset detection, resulting in reduced average sensitivity of 93.5%, 5 second average onset detection latency, and 85.5% specificity. Algorithm enhancements to improve this reduced performance are proposed."
316

Adaptively-Halting RNN for Tunable Early Classification of Time Series

Hartvigsen, Thomas 11 November 2018
Early time series classification is the task of predicting the class label of a time series before it is observed in its entirety. In time-sensitive domains where information is collected over time, it is worth sacrificing some classification accuracy in favor of earlier predictions, ideally early enough for actions to be taken. However, since accuracy and earliness are contradictory objectives, a solution to this problem must find a task-dependent trade-off. There are two common state-of-the-art methods. The first involves an analyst selecting a timestep at which all predictions must be made. This does not capture earliness on a case-by-case basis: if the selected timestep is too early, all later signals are missed, and if a signal happens early, the classifier still waits to generate a prediction. The second method is exhaustive search for signals, which encodes no timing information and does not scale to high dimensions or long time series. We design the first early classification model, called EARLIEST, to tackle this multi-objective optimization problem, jointly learning (1) at which time step to halt and generate predictions and (2) how to classify the time series. Each of these is learned based on the task and data features. We achieve an analyst-controlled balance between the goals of earliness and accuracy by pairing a recurrent neural network that learns to classify time series as a supervised learning task with a stochastic controller network that learns a halting policy as a reinforcement learning task. The halting policy dictates sequential decisions, one per timestep, of whether or not to halt the recurrent neural network and classify the time series early. This pairing of networks optimizes a global objective function that incorporates both earliness and accuracy. We validate our method on critical clinical prediction tasks in the MIMIC III database from the Beth Israel Deaconess Medical Center, along with another publicly available time series classification dataset. We show that EARLIEST outperforms two state-of-the-art LSTM-based early classification methods. Additionally, we dig deeper into our model's performance using a synthetic dataset, which shows that EARLIEST learns to halt when it observes signals, without having explicit access to signal locations. The contributions of this work are three-fold. First, our method is the first neural-network-based solution to early classification of time series, bringing the recent successes of deep learning to this problem. Second, we present the first reinforcement-learning-based solution to the unsupervised nature of early classification, learning the underlying distributions of signals through trial and error, without access to this information. Third, we propose the first joint optimization of earliness and accuracy, allowing learning of complex relationships between these contradictory goals.
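The RNN-plus-controller pairing can be sketched in PyTorch roughly as follows; the wiring and dimensions are simplified assumptions, and the REINFORCE training loop that trades off earliness against accuracy is omitted.

```python
import torch
import torch.nn as nn

class HaltingClassifier(nn.Module):
    """RNN classifier plus a stochastic controller that decides, per
    timestep, whether to halt and emit a prediction (a rough sketch)."""
    def __init__(self, n_features, n_hidden, n_classes):
        super().__init__()
        self.rnn = nn.LSTMCell(n_features, n_hidden)
        self.classify = nn.Linear(n_hidden, n_classes)
        self.halt = nn.Linear(n_hidden, 1)  # halting-probability head

    def forward(self, x):  # x: (timesteps, n_features), one series
        h = torch.zeros(1, self.rnn.hidden_size)
        c = torch.zeros(1, self.rnn.hidden_size)
        for t in range(x.size(0)):
            h, c = self.rnn(x[t].unsqueeze(0), (h, c))
            p_halt = torch.sigmoid(self.halt(h))
            # Sample a halting decision; REINFORCE would propagate the
            # earliness/accuracy reward through this stochastic choice.
            if torch.bernoulli(p_halt).item() == 1:
                return self.classify(h), t
        return self.classify(h), x.size(0) - 1
```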
317

Semi-Autonomous Wheelchair Navigation With Statistical Context Prediction

Qiao, Junqing 30 May 2016
"This research introduces the structure and elements of the system used to predict the user's interested location. The combination of DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm and GMM (Gaussian Mixture Model) algorithm is used to find locations where the user usually visits. In addition, the testing result of applying other clustering algorithms such as Gaussian Mixture model, Density Based clustering algorithm and K-means clustering algorithm on actual data are also shown as comparison. With having the knowledge of locations where the user usually visits, Discrete Bayesian Network is generated from the user's time-sequence location data. Combining the Bayesian Network, the user's current location and the time when the user left the other locations, the user's interested location can be predicted."
318

Deep Learning on Attributed Sequences

Zhuang, Zhongfang 02 August 2019
Recent research in feature learning has been extended to sequence data, where each instance consists of a sequence of heterogeneous items with variable length. However, in many real-world applications, the data exists in the form of attributed sequences, each composed of a set of fixed-size attributes and a variable-length sequence, with dependencies between them. In the attributed sequence context, feature learning remains challenging due to the dependencies between sequences and their associated attributes. In this dissertation, we focus on analyzing and building deep learning models for four new problems on attributed sequences. First, we propose a framework, called NAS, to produce feature representations of attributed sequences in an unsupervised fashion. NAS is capable of producing task-independent embeddings that can be used in various mining tasks on attributed sequences. Second, we study the problem of deep metric learning on attributed sequences. The goal is to learn a distance metric based on pairwise user feedback. For this task, we propose a framework, called MLAS, to learn a distance metric that measures the similarity and dissimilarity between attributed sequence feedback pairs. Third, we study the problem of one-shot learning on attributed sequences. This problem is important for a variety of real-world applications ranging from fraud prevention to network intrusion detection. We design a deep learning framework, OLAS, to tackle this problem. Once OLAS is trained, we can use it to make predictions not only for new data but also for entirely unseen new classes. Lastly, we investigate the problem of attributed sequence classification with an attention model. This is challenging because we now need to assess the importance of each item in each sequence, considering both the sequence itself and the associated attributes. In this work, we propose a framework, called AMAS, to classify attributed sequences using the information from the sequences, the metadata, and the computed attention. Our extensive experiments on real-world datasets demonstrate that the proposed solutions significantly improve the performance of each task over state-of-the-art methods on attributed sequences.
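In spirit, a framework like NAS fuses an attribute encoder with a sequence encoder into one embedding; the PyTorch sketch below uses invented layer sizes and a simple concatenation fusion to show the shape of such a model, not the published architecture.

```python
import torch
import torch.nn as nn

class AttributedSequenceEncoder(nn.Module):
    """Embed a (fixed-size attributes, variable-length sequence) pair."""
    def __init__(self, n_attrs, n_items, emb_dim=32, out_dim=64):
        super().__init__()
        self.attr_net = nn.Sequential(nn.Linear(n_attrs, emb_dim), nn.ReLU())
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.seq_net = nn.LSTM(emb_dim, emb_dim, batch_first=True)
        self.fuse = nn.Linear(2 * emb_dim, out_dim)  # joint representation

    def forward(self, attrs, seq):  # attrs: (B, n_attrs); seq: (B, T) item ids
        a = self.attr_net(attrs)
        _, (h, _) = self.seq_net(self.item_emb(seq))
        return self.fuse(torch.cat([a, h[-1]], dim=1))
```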
319

A machine learning approach for plagiarism detection

Alsallal, M. January 2016
Plagiarism detection is gaining increasing importance due to requirements for integrity in education. Existing research has investigated the problem of plagiarism detection with varying degrees of success. The literature reveals that there are two main methods for detecting plagiarism, namely extrinsic and intrinsic. This thesis develops two novel approaches to address both of these methods. Firstly, a novel extrinsic method for detecting plagiarism is proposed. The method is based on four well-known techniques, namely Bag of Words (BOW), Latent Semantic Analysis (LSA), stylometry and Support Vector Machines (SVM). The LSA application was fine-tuned to take in stylometric features (most common words) in order to characterise document authorship, as described in chapter 4. The results revealed that LSA-based stylometry outperformed the traditional LSA application. Support vector machine based algorithms were used to perform the classification procedure, predicting which author wrote the particular book being tested. The proposed method successfully addressed the limitations of semantic characteristics and identified the document source, assigning the book being tested to the right author in most cases. Secondly, the intrinsic detection method relies on the statistical properties of the most common words. In this method, LSA was applied to a group of most common words (MCWs) to extract their usage patterns based on the transitivity property of LSA. The feature sets of the intrinsic model were based on the frequency of the most common words, their relative frequencies in series, and the deviation of these frequencies across all books for a particular author. The intrinsic method aims to generate a model of author "style" by revealing a set of certain features of authorship. The model-generation procedure focuses on just one author, as an attempt to summarise aspects of an author's style in a definitive and clear-cut manner. The thesis also proposes a novel experimental methodology for testing the performance of both extrinsic and intrinsic methods for plagiarism detection. This methodology relies upon the CEN (Corpus of English Novels) training dataset, but divides that dataset into training and test datasets in a novel manner. Both approaches have been evaluated using the well-known leave-one-out cross-validation method. Results indicated that by integrating deep analysis (LSA) and stylometric analysis, hidden changes can be identified whether or not a reference collection exists.
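The extrinsic pipeline (most-common-word counts, LSA, then an SVM over the reduced space) can be sketched with scikit-learn as below; the 100-word vocabulary and 50-dimensional LSA space are assumed values.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Restrict features to the most common words, the stylometric signal
# described above; 100 words and 50 LSA dimensions are assumed values.
model = make_pipeline(
    CountVectorizer(max_features=100),  # most common words only
    TruncatedSVD(n_components=50),      # LSA: latent "usage pattern" space
    LinearSVC(),                        # predict which author wrote the text
)

# Usage: model.fit(book_texts, author_labels); model.predict(disputed_texts)
```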
320

Fais Ce Qu'il Te Plaît... Mais Fais Le Comme Je L'aime : Amélioration des performances en crowdfunding par l’utilisation des catégories et des récits / Give It To Me Straight... The Way I Like It : Increasing Crowdfunding Performance Using Categories and Narratives

Sitruk, Jonathan 07 September 2018
This dissertation aims to provide entrepreneurs with a better understanding of how to improve their performance when raising funds from investors. Entrepreneurs have difficulty accessing financial resources and capital because they suffer from a liability of newness. This inherent condition is due to their lack of legitimacy in their target market and leads investors to see them as inherently risky. The traditional means of financing new venture ideas have been personal savings, family and friends, banks, or professional investors. Crowdfunding has emerged as an alternative to these, and scholars in the fields of management and entrepreneurship have taken great interest in understanding its multiple facets. Most research on crowdfunding has focused on quantifiable elements that investors use to determine the quality of an entrepreneur's venture: the higher the perceived quality, the higher the likelihood that investors will invest. However, orthogonal to these elements of quality, and not addressed in current research, are the qualitative elements that allow projects to become clearer in the eyes of potential funders and that transmit valuable information about the venture in a fashion coherent with the medium through which funds are raised. This dissertation explores strategies entrepreneurs can use to increase their crowdfunding performance by understanding how investors make sense of projects and how they evaluate them, given the nature of the platform used by the entrepreneur. The thesis contributes to the literature on crowdfunding, categorization, and platforms. It first explores how entrepreneurs can use categories and narrative strategies as strategic levers to improve their performance, lowering the ambiguity of their offer while aligning their narrative strategies with the expectations of the platform they use. On a second level, the dissertation provides a deeper understanding of the relation between category spanning, ambiguity, and creativity by addressing this relatively unexplored path. Categorization theory is further enriched through a closer examination of the importance of semantic networks and visuals in the sense-making process, using a novel empirical approach. Visuals are of particular interest given that they were of seminal importance at the foundation of categorization theory, are processed by different cognitive means than words, and are of vital importance in today's world. Finally, the dissertation explores the relation between platforms and narratives by theorizing that the former are particular types of organizations whose identity is forged by their internal and external stakeholders. Platform identities are vulnerable to changes such as exogenous shocks. Entrepreneurs need to learn how to identify these identities and their potential changes in order to tailor their narrative strategies in the hope of increasing their performance.
