81

ON ALGORITHMS FOR THE COLOURFUL LINEAR PROGRAMMING FEASIBILITY PROBLEM

Rong, Guohong 10 1900 (has links)
Given colourful sets S_1, ..., S_{d+1} of points in R^d and a point p in R^d, the colourful linear programming problem is to express p as a convex combination of points x_1, ..., x_{d+1} with x_i in S_i for each i. This problem was presented by Bárány and Onn in 1997, and it is still not known whether a polynomial-time algorithm for it exists. The monochrome version of this problem, expressing p as a convex combination of points in a set S, is a traditional linear programming feasibility problem. The colourful Carathéodory Theorem, due to Bárány in 1982, provides a sufficient condition for the existence of a colourful set of points containing p in its convex hull. Bárány's result was generalized by Holmsen et al. in 2008 and by Arocha et al. in 2009 before being further generalized recently by Meunier and Deza. We study algorithms for colourful linear programming under the conditions of Bárány and their generalizations. In particular, we implement the Meunier-Deza algorithm and enhance previously used random case generators. We present computational benchmarks and a performance analysis comparing the two algorithms of Bárány and Onn, the algorithm of Meunier and Deza, and random picking. / Master of Science (MSc)
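A minimal sketch of the "random picking" baseline benchmarked in this work, assuming SciPy is available; the function names and the LP membership test below are our illustration, not the thesis's code. Each trial draws one point from each colour class and uses an LP feasibility check to test whether p lies in the convex hull of the draw.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, p):
    """LP feasibility: is p = sum(l_i x_i) with l_i >= 0 and sum(l_i) = 1?"""
    X = np.asarray(points, dtype=float)          # one point per row
    A_eq = np.vstack([X.T, np.ones(len(X))])     # convex-combination constraints
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(len(X)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(X), method="highs")
    return res.status == 0                       # 0 means a feasible optimum was found

def random_picking(colour_sets, p, max_tries=10000, seed=None):
    """Draw one point per colour class until the draw covers p (or give up)."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        pick = [s[rng.integers(len(s))] for s in colour_sets]
        if in_convex_hull(pick, p):
            return pick
    return None
```

Under Bárány's condition (p in the convex hull of every colour class), a covering colourful simplex always exists, so random picking eventually succeeds; the interest is in how its trial count compares with the pivoting-style algorithms.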
82

Neural Tabula Rasa: Foundations for Realistic Memories and Learning

Perrine, Patrick R 01 June 2023 (has links) (PDF)
Understanding how neural systems perform memorization and inductive learning tasks is of key interest in the field of computational neuroscience. Inductive learning tasks are likewise the focus of machine learning, a field that has seen rapid growth and innovation utilizing feedforward neural networks. However, there have also been concerns regarding the precipitous nature of such efforts, specifically in the area of deep learning. As a result, we revisit the foundation of the artificial neural network to better incorporate current knowledge of the brain from computational neuroscience. More specifically, a random graph was chosen to model a neural system. This random graph structure was implemented along with an algorithm for storing information, allowing the network to form memories as subgraphs of the network. This implementation was derived from Leslie Valiant's proposed neural computation system, the Neural Tabula Rasa. Contributions of this work include a new approximation of memory size, several algorithms for implementing aspects of the Neural Tabula Rasa, and empirical evidence of the functional form for the system's memory capacity. This thesis intends to benefit the foundations of learning systems, as the ability to form memories is required for a system to learn inductively.
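A hedged sketch of the flavour of random-graph memory formation described above, loosely patterned on the JOIN operation in Valiant's model; the graph size, edge probability, threshold, and memory sizes below are illustrative assumptions rather than the thesis's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, threshold = 2000, 0.01, 2        # neurons, edge probability, firing threshold
adj = rng.random((n, n)) < p           # Erdos-Renyi random directed graph

def join(memory_a, memory_b):
    """New memory: neurons receiving >= threshold edges from each parent memory."""
    from_a = adj[memory_a, :].sum(axis=0)    # per-neuron incoming-edge counts
    from_b = adj[memory_b, :].sum(axis=0)
    return np.flatnonzero((from_a >= threshold) & (from_b >= threshold))

a = rng.choice(n, size=50, replace=False)    # two previously stored memories
b = rng.choice(n, size=50, replace=False)
c = join(a, b)                               # the subgraph induced by c is the new memory
print(len(c))  # size is governed by binomial tails, hence the capacity analysis
```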
83

Wasm-PBChunk: Incrementally Developing A Racket-To-Wasm Compiler Using Partial Bytecode Compilation

Perlin, Adam C 01 June 2023 (has links) (PDF)
Racket is a modern, general-purpose programming language with a language-oriented focus. To date, Racket has found notable uses in research and education, among other applications. To expand the reach of the language, there has been a desire to develop an efficient platform for running Racket in a web-based environment. WebAssembly (Wasm) is a binary executable format for a stack-based virtual machine designed to provide a fast, efficient, and secure execution environment for code on the web. Wasm is primarily intended to be a compiler target for higher-level languages. Providing Wasm support for the Racket project may be a promising way to bring Racket to the browser. To this end, we present an incremental approach to the development of a Racket-to-Wasm compiler. We make use of an existing backend for Racket that targets a portable bytecode known as PB, along with the associated PB interpreter. We perform an ahead-of-time static translation of sections of PB into native Wasm, linking the chunks back to the interpreter before execution. By replacing portions of PB with native Wasm, we can eliminate some portion of interpretation overhead and move closer to native Wasm support for Chez Scheme (Racket's backend). Because we reuse an existing backend and interpreter, our approach already supports nearly all features of the Racket language, including delimited continuations, tail-calling behavior, and garbage collection, excluding only threading and FFI support for the time being. We perform benchmarks against a baseline to validate our approach, with promising results.
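The chunking idea is easiest to see on a toy stack machine. The sketch below is our illustration only, in Python rather than PB or Wasm: a span of bytecode is replaced ahead of time by a "native" function, and the dispatch loop hands control to it and back, just as Wasm-PBChunk links compiled chunks back to the PB interpreter.

```python
PUSH, ADD, MUL = 0, 1, 2   # a tiny stack bytecode

def interpret(code, compiled_chunks):
    """compiled_chunks maps a pc to (native_fn, bytecode_span_length)."""
    stack, pc = [], 0
    while pc < len(code):
        if pc in compiled_chunks:            # precompiled span: run it natively
            fn, span = compiled_chunks[pc]
            fn(stack)
            pc += span
            continue
        op = code[pc]
        if op == PUSH:
            stack.append(code[pc + 1]); pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b); pc += 1
    return stack[-1]

code = [PUSH, 3, PUSH, 4, ADD, PUSH, 2, MUL]        # computes (3 + 4) * 2

def chunk(stack):                                   # "compiled" form of code[4:8]
    stack.append((stack.pop() + stack.pop()) * 2)

print(interpret(code, {}))                 # 14, fully interpreted
print(interpret(code, {4: (chunk, 4)}))    # 14, with the chunk run natively
```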
84

A Heuristic Evolutionary Method for the Complementary Cell Suppression Problem

Herrington, Hira B. 04 February 2015 (has links)
Cell suppression is a common method for disclosure avoidance used to protect sensitive information in two-dimensional tables where row and column totals are published along with non-sensitive data. In tables with only positive cell values, cell suppression has been demonstrated to be NP-hard, so finding more efficient methods for producing low-cost solutions is an area of active research. Genetic algorithms (GAs) have been shown to be effective in finding good solutions to the cell suppression problem. However, these methods have the shortcoming that they tend to produce a large proportion of infeasible solutions. The primary goal of this research was to develop a GA that produces low-cost solutions with fewer infeasible solutions created at each generation than previous methods, without introducing excessive CPU runtime costs, and to implement selection and replacement operations that maintain genetic diversity during the evolution process. The GA's performance was tested using tables containing 10,000 and 100,000 cells. The primary criteria for evaluating the GA's effectiveness were the total cost of the complementary suppressions and the CPU runtime. Experimental results indicate that the GA-based method developed in this dissertation produced better-quality solutions than those produced by extant heuristics, although, because existing heuristics are very effective, it surpassed them only modestly. Existing evolutionary methods have also been used to improve upon the quality of solutions produced by heuristics. Experimental results show that the GA-based method developed in this dissertation is computationally more efficient than GA-based methods proposed in the literature; this is attributed to the fact that the specialized genetic operators designed in this study produce fewer infeasible solutions. The results of these experiments suggest the need for continued research into non-probabilistic methods to seed the initial populations; selection and replacement strategies that factor in genetic diversity at the level of the circuits protecting sensitive cells; solution-preserving crossover and mutation operators; and the use of cost-benefit ratios to determine program termination.
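As a point of reference, a generic GA skeleton for this problem is sketched below; it penalizes infeasible suppression patterns rather than avoiding them, which is exactly the weakness the dissertation's specialized operators address. `is_protected` is a hypothetical stand-in for an audit that checks whether the suppressed table still protects every sensitive cell.

```python
import random

def fitness(mask, costs, is_protected):
    """Total cost of suppressed cells, heavily penalized if protection fails."""
    cost = sum(c for c, m in zip(costs, mask) if m)
    return cost if is_protected(mask) else cost + 1e9

def evolve(costs, is_protected, pop_size=50, generations=200, p_mut=0.01):
    n = len(costs)
    pop = [[random.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, costs, is_protected))
        survivors = pop[: pop_size // 2]             # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda m: fitness(m, costs, is_protected))
```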
85

Algorithmic Foundations of Heuristic Search using Higher-Order Polygon Inequalities

Campbell, Newton Henry, Jr. 01 January 2016 (has links)
The shortest path problem in graphs is both a classic combinatorial optimization problem and a practical problem that admits many applications. Techniques for preprocessing a graph are useful for reducing shortest path query times. This dissertation studies the foundations of a class of algorithms that use preprocessed landmark information and the triangle inequality to guide A* search in graphs. A new heuristic is presented for solving shortest path queries that enables the use of higher-order polygon inequalities. We demonstrate this capability by leveraging distance information from two landmarks when visiting a vertex, as opposed to the common single-landmark paradigm. The new heuristic's novel feature is that it computes and stores a reduced amount of preprocessed information (in comparison to previous landmark-based algorithms) while enabling more informed search decisions. We demonstrate that domination of this heuristic over its predecessor depends on landmark selection and that, in general, the denser the landmark set, the better the heuristic performs. Due to the reduced memory requirement, this new heuristic admits much denser landmark sets. We conduct experiments to characterize the impact that landmark configurations have on this new heuristic, demonstrating that centrality-based landmark selection offers the best tradeoff between preprocessing and runtime. Using a developed graph library and static information from benchmark road map datasets, the algorithm is compared experimentally with previous landmark-based shortest path techniques in a fixed-memory environment to demonstrate a reduction in overall computational time and memory requirements. Experimental results are evaluated to detail the significance of landmark selection and density, the tradeoffs of performing preprocessing, and the practical use cases of the algorithm.
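For context, the single-landmark bound this line of work builds on follows from the triangle inequality: |d(L, t) - d(L, v)| <= d(v, t), so precomputed distances from each landmark L give an admissible A* heuristic. The sketch below shows that standard baseline with illustrative names and integer node ids; the dissertation's two-landmark, higher-order inequality is not reproduced here.

```python
import heapq

def alt_heuristic(v, t, landmarks, dist_from):
    """Admissible estimate of d(v, t); dist_from[L][u] is precomputed d(L, u)."""
    return max(abs(dist_from[L][t] - dist_from[L][v]) for L in landmarks)

def a_star(graph, s, t, landmarks, dist_from):
    """graph: {u: [(v, weight), ...]}. Returns the shortest s-t distance."""
    h = lambda v: alt_heuristic(v, t, landmarks, dist_from)
    dist, heap = {s: 0.0}, [(h(s), s)]
    while heap:
        _, u = heapq.heappop(heap)
        if u == t:
            return dist[u]
        for v, w in graph[u]:
            if dist[u] + w < dist.get(v, float("inf")):
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v] + h(v), v))
    return float("inf")
```

Because the ALT heuristic is consistent, the first time t is popped its distance is final; denser landmark sets tighten the bound at the price of more precomputed distance tables, which is the memory tradeoff the dissertation targets.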
86

Automatically Defined Templates for Improved Prediction of Non-stationary, Nonlinear Time Series in Genetic Programming

Moskowitz, David 01 January 2016 (has links)
Soft methods of artificial intelligence are often used in the prediction of non-deterministic time series that cannot be modeled using standard econometric methods. These series, such as those that occur in finance, often undergo changes to their underlying data generation process, resulting in inaccurate approximations or requiring additional human judgment and input, which hinders the potential for automated solutions. Genetic programming (GP) is a class of nature-inspired algorithms that aims to evolve a population of computer programs to solve a target problem. GP has been applied to time series prediction in finance and other domains; however, most GP-based approaches to these prediction problems do not consider regime change. This work introduces two new genetic programming modularity techniques, collectively referred to as automatically defined templates, which better enable prediction of time series involving regime change. These methods, based on established GP modularity techniques, take inspiration from software design patterns and are more closely modeled on the way humans actually develop software. Specifically, a regime detection branch is incorporated into the GP paradigm, and regime-specific behavior evolves in a separate program branch, implementing the template method pattern. A system was developed to test, validate, and compare the proposed approach with earlier approaches to GP modularity. Prediction experiments were performed on synthetic time series and on the S&P 500 index, and the performance of the proposed approach was evaluated by comparing prediction accuracy with existing methods. One of the two proposed techniques is shown to significantly improve performance of time series prediction in series undergoing regime change. The second proposed technique did not show any improvement and generally performed worse than existing methods or the canonical approaches. The difference in relative performance was shown to be due to a decoupling of reusable modules from the evolving main program population; this observation also explains earlier results regarding the inferior performance of genetic programming techniques using a similar, decoupled approach. Applied to financial time series prediction, the proposed approach beat a buy-and-hold return on the S&P 500 index as well as the return achieved by other regime-aware genetic programming methodologies. No approach tested beat the benchmark return when factoring in transaction costs.
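A minimal sketch of the template-method structure this describes: each individual carries a regime-detection branch and one behavior branch per regime, and prediction routes through the detector. The hand-written branches below are placeholders; in the actual system every branch is an evolved expression tree.

```python
class AutoDefinedTemplate:
    """Fixed prediction skeleton whose steps (branches) are evolved by GP."""
    def __init__(self, detect_regime, regime_branches):
        self.detect_regime = detect_regime        # window -> regime index
        self.regime_branches = regime_branches    # one predictor per regime

    def predict(self, window):
        return self.regime_branches[self.detect_regime(window)](window)

# Illustrative stand-ins for evolved branches:
detector = lambda w: 0 if w[-1] >= w[0] else 1             # up-trend vs. down-trend
branches = [lambda w: w[-1] + (w[-1] - w[-2]),             # momentum in regime 0
            lambda w: sum(w) / len(w)]                     # mean reversion in regime 1

model = AutoDefinedTemplate(detector, branches)
print(model.predict([1.0, 1.2, 1.5]))                      # regime 0 -> 1.8
```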
87

MODELING, LEARNING AND REASONING ABOUT PREFERENCE TREES OVER COMBINATORIAL DOMAINS

Liu, Xudong 01 January 2016 (has links)
In my Ph.D. dissertation, I have studied problems arising in various aspects of preferences: preference modeling, preference learning, and preference reasoning, when preferences concern outcomes ranging over combinatorial domains. Preferences are a major research component in artificial intelligence (AI) and decision theory, and are closely related to the social choice theory studied by economists and political scientists. In my dissertation, I have exploited emerging connections between preferences in AI and social choice theory. Most of my research is on qualitative preference representations that extend and combine existing formalisms such as conditional preference networks, lexicographic preference trees, answer-set optimization programs, and possibilistic logic; on learning problems that aim at discovering qualitative preference models and predictive preference information from practical data; and on preference reasoning problems centered around qualitative preference optimization and aggregation methods. Applications of my research include recommender systems, decision support tools, multi-agent systems, and Internet trading and marketing platforms.
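To make one of these formalisms concrete, here is a hedged sketch of a lexicographic preference tree in its simplest, unconditional form: attributes are ranked by importance, each with a preferred value, and outcomes are compared lexicographically. The domain and names are illustrative; conditional LP-trees additionally let the preferred value and the ordering below a node depend on more important attributes.

```python
def lptree_prefers(a, b, importance_order, preferred):
    """True iff outcome a is strictly preferred to outcome b."""
    for attr in importance_order:          # most important attribute first
        if a[attr] != b[attr]:             # first disagreement decides
            return a[attr] == preferred[attr]
    return False                           # identical outcomes: no strict preference

order = ["price", "airline", "stopovers"]  # an illustrative flight-booking domain
best = {"price": "low", "airline": "star", "stopovers": "none"}
a = {"price": "low", "airline": "rival", "stopovers": "none"}
b = {"price": "high", "airline": "star", "stopovers": "none"}
print(lptree_prefers(a, b, order, best))   # True: the price attribute decides
```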
88

Singular Value Computation and Subspace Clustering

Liang, Qiao 01 January 2015 (has links)
In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem, or a few extreme singular values of a large and sparse matrix. The standard methods of choice for computing a few extreme eigenvalues of a large symmetric matrix are the Lanczos method and the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate convergence. To compute several eigenvalues, Wielandt deflation is used in a straightforward manner. However, Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications, such as singular value computations. We therefore first propose a deflation-by-restriction scheme for the inverse-free Krylov subspace method, and we generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this scheme. We next extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem, and we consider preconditioning based on robust incomplete factorization to accelerate convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm. In the second part of this thesis, we consider the so-called subspace clustering problem, which aims at extracting a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressiveness property (SEP), such as Sparse Subspace Clustering and Low-Rank Representation, have been shown to enjoy superior performance over other methods. However, methods with SEP may result in representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis, chosen optimally in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
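A hedged sketch of the unpreconditioned Golub-Ye iteration underlying this work, for the smallest eigenvalue of the pencil (A, B) with B symmetric positive definite; deflation by restriction, preconditioning, and the singular value extension are all omitted. Each outer step builds a small Krylov space of C = A - rho*B and solves the projected pencil.

```python
import numpy as np
from scipy.linalg import eigh, qr

def inverse_free_krylov(A, B, x0, m=4, iters=100, tol=1e-10):
    """Smallest eigenpair of A x = lambda B x, without shift-and-invert."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = (x @ A @ x) / (x @ B @ x)          # Rayleigh quotient
        C = A - rho * B
        K = [x]
        for _ in range(m):                       # Krylov basis of C at x
            K.append(C @ K[-1])
        Z, _ = qr(np.column_stack(K), mode="economic")
        mu, Y = eigh(Z.T @ A @ Z, Z.T @ B @ Z)   # small projected pencil
        x_new = Z @ Y[:, 0]                      # smallest projected eigenpair
        if np.linalg.norm(x_new - np.sign(x @ x_new) * x) < tol:
            return mu[0], x_new
        x = x_new
    return mu[0], x
```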
89

Graph-based Regularization in Machine Learning: Discovering Driver Modules in Biological Networks

Gao, Xi 01 January 2015 (has links)
The curiosity of human nature drives us to explore the origins of what makes each of us different. From ancient legends and mythology, Mendel's laws, and the Punnett square to modern genetic research, we carry on this old but eternal question. Thanks to the technological revolution, today's scientists try to answer this question using easily measurable gene expression and other profiling data. However, the exploration can easily get lost in data of growing volume, dimensionality, noise, and complexity. This dissertation is aimed at developing new machine learning methods that take data from different classes as input, augment them with knowledge of feature relationships, and train classification models that serve two goals: 1) class prediction for previously unseen samples; and 2) knowledge discovery of the underlying causes of class differences. Application of our methods in genetic studies can help scientists take advantage of existing biological networks, generate diagnoses with higher accuracy, and discover the driver networks behind the differences. We propose three new graph-based regularization algorithms. The Graph Connectivity Constrained AdaBoost algorithm combines a connectivity module, a deletion function, and a model retraining procedure with the AdaBoost classifier. The Graph-regularized Linear Programming Support Vector Machine integrates a penalty term based on a submodular graph cut function into the linear classifier's objective function. Proximal Graph LogisticBoost adds lasso and graph-based penalties to the logistic risk function of an ensemble classifier. Results of tests of our models on simulated biological datasets show that the proposed methods are able to produce accurate, sparse classifiers and can help discover true genetic differences between phenotypes.
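The shared mechanic behind such methods can be sketched generically: a linear classifier whose weights are smoothed over a known feature network through a graph-Laplacian penalty w'Lw, so that connected genes receive similar weights. The plain gradient-descent logistic model below is our simplified stand-in; the dissertation's actual penalties (connectivity constraints, submodular graph cuts) are more specialized.

```python
import numpy as np

def train(X, y, W, lam=0.1, lr=0.01, steps=2000):
    """X: (n, d) data; y in {0,1}^n; W: (d, d) symmetric feature-adjacency matrix."""
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian of the feature network
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic predictions
        grad = X.T @ (p - y) / len(y) + 2 * lam * (L @ w)
        w -= lr * grad                        # descend loss + lam * w'Lw
    return w
```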
90

Identifying Parameters for Robust Network Growth using Attachment Kernels: A case study on directed and undirected networks

Abdelzaher, Ahmed F 01 January 2016 (has links)
Network growing mechanisms are used to construct random networks whose structural behavior is similar to that of existing networks, such as genetic networks, in an effort to understand the evolution of complex topologies. Popular mechanisms, such as preferential attachment, are capable of preserving network features such as the degree distribution. However, little is known about how robust such randomly grown structures are to disturbances (e.g., edge deletions). Moreover, preferential attachment does not target optimizing the network's functionality, such as information flow. Here, we consider a network to be optimal if its natural functionality is relatively high and it possesses some degree of robustness to disturbances. Specifically, a robust network would continue to (1) transmit information, (2) preserve its connectivity, and (3) preserve its internal clusters after failures. In an effort to pinpoint features that could replace or complement node degree as criteria for preferential attachment, we present a case study on both undirected and directed networks. For undirected networks, we study wireless sensor networks and outline a strategy using Support Vector Regression. For directed networks, we formulate an Integer Linear Program to determine exact optimal structures for transcriptional regulatory networks, from which we can identify variations in structural features after optimization.
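For reference, the baseline mechanism discussed above, degree-proportional preferential attachment, together with one of the robustness probes mentioned (connectivity after random edge deletions), is sketched below using networkx; sizes and deletion rates are illustrative.

```python
import random
import networkx as nx

def grow_preferential(n, m):
    """Attach each new node to m distinct nodes chosen proportionally to degree."""
    G = nx.complete_graph(m + 1)
    for new in range(m + 1, n):
        endpoints = [v for e in G.edges() for v in e]   # degree-weighted pool
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in targets:
            G.add_edge(new, t)
    return G

G = grow_preferential(1000, 2)
G.remove_edges_from(random.sample(list(G.edges()), len(G.edges()) // 10))
print(nx.is_connected(G), len(max(nx.connected_components(G), key=len)))
```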
