111.
Advances in Piecewise Smooth Image Reconstruction / Juengling, Ralf (17 March 2014)
Advances and new insights into algorithms for piecewise smooth image reconstruction are presented. Such algorithms fit a piecewise smooth function to image data without prior knowledge of the number of regions or the location of region boundaries in the best fitting function. This is a difficult model selection problem since the number of parameters of possible solutions varies widely.
The approach followed in this work was proposed by Yvan Leclerc. It uses the Minimum Description Length principle to make the reconstruction problem well-posed: the best fitting function yields the shortest encoding of the image data. In order to derive a code length formula, the class of functions is restricted to piecewise polynomial. The resulting optimization problem may have many local minima, and a good initial approximation is required in order to find acceptable solutions. Good initial approximations may be generated at the cost of solving a sequence of related optimization problems, as prescribed by a continuation method.
Several problems with this approach are identified and addressed. First, success or failure of the continuation method is found to be sensitive to the choice of objective function parameters. Second, the optimization method used in prior work may fail to converge, and, third, it converges too slowly to be useful in many vision applications.
I address the first problem in three different ways. First, a revised continuation method is less sensitive to parameter choice. Second, I show how to move control over success or failure from the objective function parameters to the continuation method. Third, a new objective function is derived which includes one parameter instead of the two parameters used in prior work. Experimental results show that all measures improve robustness with respect to parameter choice.
In order to address the optimization-related problems I use a quasi-Newton line-search method. This method is guaranteed to converge and may converge at a faster rate than the relaxation method used in prior work. To realize a faster convergence rate, I introduce a new parameter whose role is to improve variable scaling and problem conditioning. Further runtime improvements result from using extrapolation in the continuation method. Experimental results show overall runtime improvements of an order of magnitude and more.
My reconstruction algorithm outperforms the well-known Canny edge detector on the Berkeley boundary detection task. This novel result demonstrates the merits of image reconstruction as a means of extracting information from an image.
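The continuation strategy described above can be illustrated with a small sketch. This is a hypothetical one-dimensional example, not the thesis's image objective: a family of objectives f_t interpolates from an easy convex problem (t = 0) to the hard non-convex target (t = 1), and each level is solved with a warm start from the previous minimizer. A plain gradient-descent inner solver stands in for the quasi-Newton line search used in the thesis.

```python
def grad_f(x, t):
    # Gradient of f_t(x) = (1 - t) * x**2 + t * (x**4 - 3*x**2 + x),
    # a toy family morphing from convex (t=0) to non-convex (t=1).
    return 2 * (1 - t) * x + t * (4 * x**3 - 6 * x + 1)

def inner_solve(x, t, step=0.02, iters=500):
    # Simple gradient descent; the thesis uses a quasi-Newton line search.
    for _ in range(iters):
        x -= step * grad_f(x, t)
    return x

def continuation(levels=20):
    x = 0.0                      # minimizer of the easy t = 0 problem
    for k in range(1, levels + 1):
        t = k / levels
        x = inner_solve(x, t)    # warm start from the previous level
    return x

x_star = continuation()          # tracks the branch to the global minimum
```

Started directly at t = 1 from x = 0, the same descent would fall into the shallower local minimum; the warm-started sequence follows the continuous branch of minimizers instead.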
112.
CITY NETWORK RESILIENCE QUANTIFICATION UNDER SYSTEMIC RISKS: A HYBRID MACHINE LEARNING-GENETIC ALGORITHM APPROACH / Hassan, Rasha (January 2020)
Disruptions due to natural or anthropogenic hazards significantly impact the operation of critical infrastructure networks because they may instigate network-level cascade (i.e., systemic) risks. Quantifying and enhancing the resilience of such complex, dynamically evolving networks therefore minimizes both the likelihood and the consequences of systemic risks. Focusing on robustness, one of the key resilience attributes, and on transportation networks, a key class of critical infrastructure, the current study develops a hybrid complex-network-theoretic and genetic-algorithm analysis approach. To demonstrate the approach, the robustness of a city transportation network is quantified by integrating complex network topology measures with a dynamic flow-redistribution model. The network robustness is then investigated under different operational measures, and the corresponding absorptive capacity thresholds are quantified. Finally, the robustness of the network under different failure scenarios is evaluated using genetic algorithms coupled with k-means clustering to classify the network components. The hybrid approach developed in this study is expected to facilitate optimizing systemic risk mitigation strategies for critical infrastructure networks under disruptive events. / Thesis / Master of Applied Science (MASc)
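The flow-redistribution idea behind such cascade (systemic) risk models can be sketched in a few lines. This is a toy model, not the thesis's: when a component fails, its load is shed equally onto surviving neighbours, and any component pushed past its capacity fails in turn; robustness is the surviving fraction. All names, loads, and capacities are invented for illustration.

```python
def cascade_robustness(adj, load, capacity, initial_failures):
    # adj: adjacency lists; load/capacity: per-node dicts.
    failed = set(initial_failures)
    load = dict(load)                      # do not mutate caller's dict
    frontier = list(initial_failures)
    while frontier:
        node = frontier.pop()
        alive = [n for n in adj[node] if n not in failed]
        if alive:
            share = load[node] / len(alive)  # shed load equally
            for n in alive:
                load[n] += share
                if load[n] > capacity[n]:    # overload -> secondary failure
                    failed.add(n)
                    frontier.append(n)
        load[node] = 0.0
    return 1 - len(failed) / len(adj)        # surviving fraction

# 5-node toy network: node 0 fails; node 1 is overloaded and fails too,
# but nodes 2, 3, 4 absorb the redistributed load.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
load = {i: 1.0 for i in adj}
capacity = {0: 1.5, 1: 1.4, 2: 2.6, 3: 3.0, 4: 3.0}
r = cascade_robustness(adj, load, capacity, [0])
```

Sweeping the capacities downward until the cascade becomes total is one simple way to locate an absorptive capacity threshold of the kind quantified in the thesis.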
113.
A Bridge between Graph Neural Networks and Transformers: Positional Encodings as Node Embeddings / Manu, Bright Kwaku (01 December 2023)
Graph Neural Networks (GNNs) and Transformers are powerful frameworks for machine learning tasks. Although they evolved separately in different fields, recent research has revealed similarities and links between them. This work bridges the gap between GNNs and Transformers by offering a unified framework that highlights their similarities and distinctions. We study positional encodings and identify the key properties that make them suitable as node embeddings, showing that expressiveness, efficiency, and interpretability can be achieved in the process. Positional encodings can thus serve as node embeddings for machine learning tasks such as node classification, graph classification, and link prediction. We discuss remaining challenges and provide future directions.
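One concrete family of positional encodings can make the idea tangible. The sketch below uses the sinusoidal encodings of the original Transformer, assigned to graph nodes via a fixed node ordering; this is only an illustration of "positional encodings as node embeddings", not the thesis's actual constructions (which would typically be graph-aware, e.g. Laplacian eigenvectors).

```python
import math

def sinusoidal_encoding(pos, dim):
    # Classic Transformer positional encoding: even indices get sin,
    # odd indices get cos, with geometrically spaced frequencies.
    vec = []
    for i in range(dim):
        angle = pos / (10000 ** (2 * (i // 2) / dim))
        vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return vec

# Assign each node of a small graph an embedding from its position in a
# fixed ordering; GNN or Transformer layers could consume these features.
nodes = [0, 1, 2, 3]
embeddings = {v: sinusoidal_encoding(v, 8) for v in nodes}
```

Distinct positions receive distinct vectors, which is the minimal property an embedding needs before expressiveness and interpretability can even be discussed.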
114.
Foundations Of Memory Capacity In Models Of Neural Cognition / Chowdhury, Chandradeep (01 December 2023)
A central problem in neuroscience is to understand how memories are formed by the activities of neurons. Valiant's neuroidal model addresses this question by modeling the brain as a random graph and memories as subgraphs within that graph. However, the question of memory capacity within that model has not been explored: how many memories can the brain hold? Valiant introduced interference between memories as the factor defining capacity; excessive interference signals that the model has reached capacity. Since then, exploration of capacity has been limited, although recent investigations have examined the capacity of the Assembly Calculus, a derivative of Valiant's neuroidal model. In this paper, we provide rigorous definitions of capacity and interference and present theoretical formulations of memory capacity within a finite set, where subsets represent memories. We propose that these results can be adapted to both the neuroidal model and the Assembly Calculus. Furthermore, we substantiate our claims with simulations that validate the theoretical findings. Our study aims to contribute essential insights into memory capacity in complex cognitive models, offering ideas for applications and extensions to contemporary models of cognition.
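The subsets-as-memories setting admits a direct simulation. The sketch below is illustrative only (the parameters and the overlap-based interference rule are assumptions, not the thesis's exact definitions): memories are random r-subsets of an n-element set, interference occurs when a new memory overlaps a stored one in more than `threshold` elements, and capacity is the count stored before the first interference.

```python
import random

def simulate_capacity(n, r, threshold, seed=0):
    # Store random r-element subsets until one "interferes" (overlaps a
    # stored memory in more than `threshold` elements).  Illustrative
    # parameters; seeded for reproducibility.
    rng = random.Random(seed)
    stored = []
    while True:
        memory = set(rng.sample(range(n), r))
        if any(len(memory & s) > threshold for s in stored):
            return len(stored)
        stored.append(memory)

capacity = simulate_capacity(n=100, r=10, threshold=3)
```

Tightening the interference threshold can only shorten the run, so capacity is monotone in the threshold, which is the qualitative behaviour a theoretical capacity formula must reproduce.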
115.
The (Nested) Word Problem: Formal Languages, Group Theory, and Languages of Nested Words / Henry, Christopher S. (10 1900)
This thesis concerns itself with drawing out some interesting connections between the fields of group theory and formal language theory. Given a group with a finite set of generators, it is natural to consider the set of generators and their inverses as an alphabet. We can then consider formal languages in which every group element has at least one representative. We examine what the structure of such a language can tell us about group-theoretic properties, focusing on the word problem, automatic structures on groups, and generalizations of automatic structures. Finally, we prove new results concerning applications of languages of nested words to the study of the word problem. / Master of Science (MSc)
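The word problem itself is easy to state concretely. A minimal Python illustration (not taken from the thesis; capital letters stand in for inverse generators): a word over the generators of the free group F2 represents the identity exactly when it freely reduces to the empty word, a check a pushdown automaton, and hence a stack, can perform.

```python
# Capitals denote inverses: A = a^{-1}, B = b^{-1}.
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduces_to_identity(word):
    # Free reduction via a stack: adjacent x x^{-1} pairs cancel.
    stack = []
    for letter in word:
        if stack and stack[-1] == INVERSE[letter]:
            stack.pop()
        else:
            stack.append(letter)
    return not stack        # identity iff everything cancelled
```

The set of identity words of F2 is thus a context-free (indeed deterministic context-free) language; the thesis pursues what richer language classes, such as languages of nested words, can say about the word problem of other groups.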
116.
On the Parallelization of a Search for Counterexamples to a Conjecture of Erd\H{o}s / Shen, ShengWei (10 1900)
Denote by $k_t(G)$ the number of cliques of order $t$ in a graph $G$ having $n$ vertices. Let $k_t(n) = \min\{k_t(G)+k_t(\overline{G})\}$, where $\overline{G}$ denotes the complement of $G$. Let $c_t(n) = {k_t(n)}/{\tbinom{n}{t}}$ and let $c_t$ be the limit of $c_t(n)$ as $n$ goes to infinity. A 1962 conjecture of Erd\H{o}s stating that $c_t = 2^{1-\tbinom{t}{2}}$ was disproved by Thomason in 1989 for all $t\geq 4$. Tighter counterexamples have been constructed by Jagger, {\v S}{\v t}ov{\' \i}{\v c}ek and Thomason in 1996, by Thomason for $t\leq 6$ in 1997, and by Franek for $t=6$ in 2002. Further tightenings for $t=6,7$, and $8$ were recently obtained by Deza, Franek, and Liu.

We investigate the computational framework used by Deza, Franek, and Liu. In particular, we present the benefits and limitations of different parallel computer memory architectures and parallel programming models. We propose a functional decomposition approach, implemented in C++ with the POSIX thread (Pthread) library for multi-threading. Computational benchmarking of the parallelized framework and a performance analysis, including a comparison with the original computational framework, are presented. / Master of Science (MSc)
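The quantity $k_t(n)$ can be brute-forced for tiny $n$, which makes the definitions concrete (the real searches parallelized in the thesis operate on vastly larger, structured constructions; this Python sketch is only a definitional check).

```python
from itertools import combinations

def k_t(edges, n, t):
    # Count order-t cliques in the graph on vertices 0..n-1.
    e = set(map(frozenset, edges))
    return sum(1 for verts in combinations(range(n), t)
               if all(frozenset(p) in e for p in combinations(verts, 2)))

def min_mono_cliques(n, t):
    # k_t(n): minimize k_t(G) + k_t(complement of G) over all graphs G
    # on n vertices, by enumerating all 2^C(n,2) graphs.  Tiny n only.
    pairs = list(combinations(range(n), 2))
    best = None
    for mask in range(2 ** len(pairs)):
        g = [p for i, p in enumerate(pairs) if mask >> i & 1]
        gc = [p for i, p in enumerate(pairs) if not mask >> i & 1]
        total = k_t(g, n, t) + k_t(gc, n, t)
        best = total if best is None else min(best, total)
    return best
```

For example, the pentagon $C_5$ is triangle-free and self-complementary, so $k_3(5) = 0$; the asymptotic constants $c_t$ that the counterexamples attack only emerge for $n$ far beyond exhaustive search, hence the need for the parallelized framework.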
117.
Duality theory for optimal mechanism design / Giannakopoulos, Ioannis (January 2015)
In this work we present a general duality-theory framework for revenue maximization in additive Bayesian auctions involving multiple items and many bidders whose values for the goods follow arbitrary continuous joint distributions over some multi-dimensional real interval. Although the single-item case was resolved in a very elegant way by the seminal work of Myerson [1981], optimal solutions involving more items remain elusive. Our framework extends linear programming duality and complementarity to constraints with partial derivatives. The dual system reveals the natural geometric nature of the problem and highlights its connection with the theory of bipartite graph matchings. We demonstrate the power of the framework by applying it to various monopoly settings in which a seller of multiple heterogeneous goods faces a buyer with independent item values drawn from various distributions of interest, designing both exact and approximately optimal selling mechanisms. Previously, optimal solutions were known only for settings with at most two or three goods and for a very limited range of distributional priors. The duality framework is used not only for proving optimality but, perhaps more importantly, for deriving the optimal mechanisms themselves. Our main results include: a simple deterministic mechanism, the Straight-Jacket Auction (SJA), defined in a greedy, recursive way through natural geometric constraints for many uniformly distributed goods, with exact optimality proven for up to six items and general optimality conjectured; a scheme of sufficient conditions for exact optimality in two-good settings with general independent distributions; a technique for upper-bounding the optimal revenue for arbitrarily many goods, with applications to uniform and exponential priors; and a proof that offering all items deterministically in a single full bundle is the optimal way to sell multiple exponentially i.i.d. items.
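The single-item benchmark resolved by Myerson [1981] admits a tiny worked example. For one buyer with value v uniform on [0, 1], posting a price p earns p * (1 - p) in expectation, maximized at the Myerson reserve p = 1/2 with revenue 1/4. The grid search below is only an illustrative check of that fact, not part of the thesis's multi-item machinery.

```python
def posted_price_revenue(p):
    # p * Pr[v >= p] for v ~ Uniform[0, 1], i.e. p * (1 - p).
    return p * (1 - p)

# Maximize over a fine price grid; the concave objective peaks at 1/2.
grid = [i / 1000 for i in range(1001)]
best_price = max(grid, key=posted_price_revenue)
```

It is precisely this closed-form tractability that disappears with a second item, which is what the duality framework above is built to recover.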
118.
Triple Non-negative Matrix Factorization Technique for Sentiment Analysis and Topic Modeling / Waggoner, Alexander A (01 January 2017)
Topic modeling refers to the process of algorithmically sorting documents into categories based on some common relationship between them; this common relationship is considered the "topic" of the documents. Sentiment analysis refers to the process of algorithmically sorting a document into a positive or negative category depending on whether it expresses a positive or negative opinion on its topic. In this paper, I consider the open problem of classifying a document into both a topic category and a sentiment category. This has a direct application in the retail industry, where companies may want to scour the web for documents (blogs, Amazon reviews, etc.) that both speak about their product and give an opinion on it (positive, negative, or neutral). My solution uses a Non-negative Matrix Factorization (NMF) technique to determine the topic classifications of a document set, and further factors the matrix to discover the sentiment within each topic category.
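The two-factor NMF at the core of this approach can be sketched with the classic multiplicative updates of Lee and Seung on a toy term-document matrix. The triple factorization of the paper adds a third factor, but the update pattern is the same; this pure-Python version, with invented data, is for illustration on tiny matrices only.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=200, seed=0):
    # Factor V (m x n) into nonnegative W (m x k) and H (k x n) by
    # multiplicative updates, which keep all entries nonnegative and
    # monotonically decrease the squared reconstruction error.
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

def frobenius_error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

# Toy term-document matrix with two obvious "topics" (word blocks).
V = [[4, 4, 0, 0], [4, 4, 0, 0], [0, 0, 4, 4], [0, 0, 4, 4]]
W, H = nmf(V, 2)
```

The rows of H then act as topic signatures over the vocabulary, and the rows of W assign documents to topics; a further factorization of this kind is what the paper uses to separate sentiment.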
119.
An Algorithm for the Machine Calculation of Minimal Paths / Whitinger, Robert (01 August 2016)
Problems involving the minimization of functionals date back to antiquity. The mathematics of the calculus of variations has provided a framework for the analytical solution of a limited class of such problems. This paper describes a numerical approximation technique for obtaining machine solutions to minimal path problems. It is shown that this technique is applicable not only to the common case of finding geodesics on parameterized surfaces in R^3, but also to the general case of finding minimal functionals on hypersurfaces in R^n associated with an arbitrary metric.
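The numerical idea can be miniaturized: represent a candidate path as a polyline with fixed endpoints and descend on the sum of segment lengths. This hypothetical sketch works in the flat Euclidean plane, where the minimizer is the straight line; the paper's technique handles curved surfaces and general metrics, which this example does not attempt.

```python
import math

def path_length(pts):
    # Total length of the polyline through pts.
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def relax(pts, iters=2000, step=0.1):
    # Gradient descent on total length; endpoints stay fixed.
    pts = [list(p) for p in pts]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            for d in (0, 1):
                g = 0.0          # d(length)/d(pts[i][d])
                for j in (i - 1, i + 1):
                    seg = math.dist(pts[i], pts[j])
                    if seg > 1e-12:
                        g += (pts[i][d] - pts[j][d]) / seg
                pts[i][d] -= step * g
    return pts

start, end = (0.0, 0.0), (4.0, 0.0)
wiggly = [start, (1.0, 1.0), (2.0, -1.0), (3.0, 1.0), end]
straightened = relax(wiggly)     # converges toward the straight segment
```

Replacing the Euclidean segment length with a metric-weighted length is the step that takes this toy toward geodesics on hypersurfaces.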
120.
Vertex Weighted Spectral Clustering / Masum, Mohammad (01 August 2017)
Spectral clustering is often used to partition a data set into a specified number of clusters. Both the unweighted and the vertex-weighted approaches use eigenvectors of the Laplacian matrix of a graph. Our focus is on using vertex-weighted methods to refine the clustering of observations. An eigenvector corresponding to the second smallest eigenvalue of the Laplacian matrix of a graph is called a Fiedler vector, and its coefficients are used to partition the vertices of the graph into two clusters. A vertex is classified as unassociated if its Fiedler coefficient is close to zero compared to the largest Fiedler coefficient of the graph. We propose a vertex-weighted spectral clustering algorithm which incorporates a vector of weights for each vertex of a given graph to form a vertex-weighted graph. The proposed algorithm predicts the association of equidistant or nearly equidistant data points from both clusters, while the unweighted clustering does not provide an association. Finally, we implement both the unweighted and the vertex-weighted spectral clustering algorithms on several data sets to show that the proposed algorithm works in general.
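The unweighted Fiedler-vector bipartition can be sketched without an eigensolver library: power-iterate on B = c*I - L while deflating the constant all-ones eigenvector, so the iteration converges to the eigenvector for the smallest nonzero Laplacian eigenvalue, and split vertices by sign. The graph (two triangles joined by a bridge) and the shift c are invented for illustration; the vertex-weighted refinement of the thesis is not shown.

```python
def fiedler_partition(adj, c=10.0, iters=500):
    # adj: adjacency lists for an undirected graph on vertices 0..n-1.
    n = len(adj)
    deg = [len(adj[i]) for i in range(n)]
    v = [(-1.0) ** i * (i + 1) for i in range(n)]   # arbitrary start
    for _ in range(iters):
        # w = (c*I - L) v  with Laplacian L = D - A
        w = [(c - deg[i]) * v[i] + sum(v[j] for j in adj[i])
             for i in range(n)]
        mean = sum(w) / n              # deflate the all-ones eigenvector
        w = [x - mean for x in w]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Sign of the Fiedler coefficient assigns each vertex to a cluster.
    return ([i for i in range(n) if v[i] >= 0],
            [i for i in range(n) if v[i] < 0])

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part_a, part_b = fiedler_partition(adj)
```

Coefficients near zero (here, the two bridge endpoints have the smallest magnitudes) are exactly the "unassociated" vertices whose assignment the vertex-weighted method is designed to resolve.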