431 |
Real-time implementation of signal processing algorithms for cochlear implant applications / Ramachandran, Rohith, January 2008 (has links)
Thesis (M.S.)--University of Texas at Dallas, 2008. / Includes vita. Includes bibliographical references (leaves 75-78)
|
432 |
Higher order hierarchal curvilinear triangular vector elements for the finite element method in computational electromagnetics / Marais, Neilen, 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: The Finite Element Method (FEM), as applied to Computational Electromagnetics (CEM), can be used to solve a large class of electromagnetics problems with high accuracy and good computational efficiency. Computational efficiency can be improved by using element basis functions of higher order. If, however, the chosen element type is not able to accurately discretise the computational domain, the converse might be true. This thesis investigates the application of elements with curved sides, and higher order basis functions, to computational domains with curved boundaries. It is shown that these elements greatly improve the computational efficiency of the FEM applied to such domains, as compared to using elements with straight sides and/or low order bases. / AFRIKAANSE OPSOMMING (English translation): The Finite Element Method (FEM) can be applied broadly to Computational Electromagnetics, with excellent accuracy and a high level of efficiency. Computational efficiency can be improved by using higher order element basis functions. If, however, the element cannot discretise the computational domain effectively, the converse may hold. This thesis investigates the application of elements with curved sides, and higher order basis functions, to computational domains with curved boundaries. It is shown that such elements bring a noteworthy improvement in the computational efficiency of the FEM, compared with straight-sided and/or lower order elements.
|
433 |
Inferring diffusion models with structural and behavioral dependency in social networks / Bao, Qing, 23 August 2016 (has links)
Online social and information networks, like Facebook and Twitter, exploit the influence of neighbors to achieve effective information sharing and spreading. The process by which information spreads via the connected nodes in social and information networks is referred to as diffusion. In the literature, a number of diffusion models have been proposed for applications like influential user identification and personalized recommendation. However, comprehensive studies that discover the hidden diffusion mechanisms governing information diffusion using a data-driven paradigm are still lacking. This thesis research aims to design novel diffusion models with the structural and behavioral dependency of neighboring nodes for representing social networks, and to develop computational algorithms to infer the diffusion models, as well as the underlying diffusion mechanisms, based on information cascades observed in real social networks.

By incorporating structural dependency and diversity of the node neighborhood into a widely used diffusion model called the Independent Cascade (IC) Model, we first propose a component-based diffusion model where the influence of parent nodes is exerted via connected components. Instead of estimating the node-based diffusion probabilities as in the IC Model, component-based diffusion probabilities are estimated using an expectation maximization (EM) algorithm derived under a Bayesian framework. Also, a newly derived structural diversity measure, namely dynamic effective size, is proposed for quantifying the dynamic information redundancy within each parent component.

The component-based diffusion model suggests that node connectivity is a good proxy to quantify how a node's activation behavior is affected by its neighborhood. To model the behavioral dependency of the node neighborhood directly, we then propose a co-activation pattern based diffusion model by integrating the latent class model into the IC Model, where the co-activation patterns of parent nodes form the latent classes for each node. Both the co-activation patterns and the corresponding pattern-based diffusion probabilities are inferred using a two-level EM algorithm. As compared to the component-based diffusion model, the inferred co-activation patterns can be interpreted as soft parent components, providing insights on how each node is influenced by its neighbors as reflected by the observed cascade data.

With the motivation to discover a common set of over-represented temporal activation patterns (motifs) characterizing the overall diffusion in a social network, we further propose a motif-based diffusion model. By considering the temporal ordering of the parent activations and the social roles estimated for each node, each temporal activation motif is represented using a Markov chain with the social roles as its states. Again, a two-level EM algorithm is proposed to infer both the temporal activation motifs and the corresponding diffusion network simultaneously. The inferred activation motifs can be interpreted as the underlying diffusion mechanisms characterizing the diffusion happening in the social network.

Extensive experiments have been carried out to evaluate the performance of all the proposed diffusion models using both synthetic and real data. The results obtained and presented in the thesis demonstrate the effectiveness of the proposed models. In addition, we discuss in detail how to interpret the inferred co-activation patterns and interaction motifs as diffusion mechanisms in the context of different real social network data sets.
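All three proposed models extend the Independent Cascade (IC) Model; for reference, the following is a minimal sketch of the standard IC simulation itself. The graph, edge probabilities, and seed set are illustrative, and the thesis's component-, pattern-, and motif-based extensions are not shown.

```python
import random

def independent_cascade(graph, prob, seeds, rng=random.Random(0)):
    """Simulate one run of the Independent Cascade (IC) process.

    graph: dict node -> list of out-neighbors
    prob:  dict (u, v) -> probability that a newly active u activates v
    seeds: iterable of initially active nodes
    Returns the set of all nodes activated during the cascade.
    """
    active = set(seeds)
    frontier = list(active)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly activated node gets exactly one independent
                # chance to activate each currently inactive neighbor.
                if v not in active and rng.random() < prob[(u, v)]:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Toy usage on a three-node graph.
graph = {"a": ["b", "c"], "b": ["c"], "c": []}
prob = {("a", "b"): 0.5, ("a", "c"): 0.2, ("b", "c"): 0.7}
print(independent_cascade(graph, prob, seeds={"a"}))
```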
|
434 |
An evaluation of local two-frame dense stereo matching algorithms / Van der Merwe, Juliaan Werner, 06 June 2012 (has links)
M. Ing. / The process of extracting depth information from multiple two-dimensional images of the same scene is known as stereo vision. It is of central importance to the field of machine vision, as it is a low-level task required by many higher-level applications. The past few decades have witnessed the development of hundreds of different stereo vision algorithms, which has made it difficult to classify and compare the various approaches to the problem. In this research we provide an overview of the types of approaches that exist to solve the problem of stereo vision, focusing on a specific subset of algorithms known as local stereo algorithms. Our goal is to critically analyse and compare a representative sample of local stereo algorithms in terms of both speed and accuracy. We also divide the algorithms into discrete, interchangeable components and experiment to determine the effect that each of the alternative components has on an algorithm's speed and accuracy. We investigate further to quantify and analyse the effect of various design choices within specific algorithm components. Finally, we assemble all of the knowledge gained through the experimentation to compose and optimise a novel algorithm.

The experimentation highlighted the fact that by far the most important component of a local stereo algorithm is the manner in which it aggregates matching costs. All of the top-performing local stereo algorithms dynamically define the shape of the windows over which the matching costs are aggregated, aiming to include in a window only pixels that are likely to be at the same depth as the centre pixel of the window. Since the depth is unknown, the cost aggregation techniques use colour and proximity information to best guess whether pixels are at the same depth when defining the shape of the aggregation windows. Local stereo algorithms are usually less accurate than global methods, but they are supposed to be faster and more parallelisable. These cost aggregation techniques result in very accurate depth estimates, but unfortunately they are also computationally very expensive. We believe the focus of local stereo algorithm development should be speed. Using the experimental results, we developed an algorithm that achieves accuracies in the same order of magnitude as the state-of-the-art algorithms while reducing the computation time by over 50%.
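For reference, here is a minimal fixed-window sketch of the local pipeline described above: per-pixel matching costs, window-based cost aggregation, and winner-take-all disparity selection. This is the simple square-window baseline, not the adaptive-window aggregation the top performers use; the window size and disparity range are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def block_matching_disparity(left, right, max_disp=16, half_win=3):
    """Fixed-window local stereo: for every pixel, aggregate the absolute
    intensity difference over a square window at each candidate disparity
    and pick the disparity with the lowest cost (winner-take-all).

    left, right: 2-D grayscale arrays of equal shape (left is reference).
    """
    h, w = left.shape
    k = 2 * half_win + 1
    box = np.ones((k, k)) / (k * k)          # uniform aggregation window
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Pixel-wise matching cost at disparity d (valid columns only).
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        # Cost aggregation: box-filter the per-pixel costs.
        costs[d, :, d:] = convolve2d(diff, box, mode="same")
    return np.argmin(costs, axis=0)           # disparity map

# Toy usage: a random texture shifted right by 4 pixels.
rng = np.random.default_rng(0)
right_img = rng.random((60, 80))
left_img = np.roll(right_img, 4, axis=1)
print(np.median(block_matching_disparity(left_img, right_img)))  # ~4
```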
|
435 |
Overlapping community detection exploiting direct dependency structures in complex networks / Liang, Fengfeng, 30 August 2017 (has links)
Many important applications in the social, ecological, epidemiological, and biological sciences can be modeled as complex systems in which a node or variable interacts with another via the edges in the network. Community detection has been known to be important in obtaining insights into the network structure characteristics of these complex systems. The existing community detection methods often assume that the pairwise interaction data between nodes are already available, and they simply apply the detection algorithms to the network. However, the predefined network might contain inaccurate structures as a result of indirect effects that stem from the nodes' high-order interactions, which poses challenges for the detection algorithms applied to it. Meanwhile, existing methods to infer the direct interaction relationships suffer from the difficulty of identifying the cut-point value that differentiates direct interactions from indirect interactions. In this thesis, we consider the overlapping community detection problem with determination and integration of the structural information of direct dependency interactions. We propose a new overlapping community detection model, named direct-dependency-based nonnegative matrix factorization (DNMF), that exploits the Bayesian framework for pairwise ordering to incorporate the structural information of the underlying network. To evaluate the effectiveness and efficiency of the proposed method, we compare it with state-of-the-art methods on benchmark datasets collected from different domains. Our empirical results show that, after the incorporation of a direct dependency network, significant improvement is seen in community detection performance in networks with homophilic effects.
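To illustrate how nonnegative matrix factorization yields overlapping communities, here is a plain symmetric-NMF sketch. This is a standard baseline, not the proposed DNMF model with direct-dependency integration; the damped multiplicative update and the membership threshold are common choices assumed here for illustration.

```python
import numpy as np

def symmetric_nmf(A, k, iters=500, seed=0, eps=1e-9):
    """Factor a symmetric adjacency matrix A ≈ W @ W.T with W >= 0.

    Row i of W gives node i's soft memberships in the k communities,
    so a node with two large entries belongs to two communities.
    Uses a damped multiplicative update for min ||A - W W^T||_F^2.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k))
    for _ in range(iters):
        numer = A @ W
        denom = W @ (W.T @ W) + eps
        W *= 0.5 * (1.0 + numer / denom)   # damping improves stability
    return W

# Toy usage: two triangles sharing node 2, which should overlap both.
A = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]:
    A[u, v] = A[v, u] = 1.0
W = symmetric_nmf(A, k=2)
print((W > 0.5 * W.max(axis=0)).astype(int))   # thresholded memberships
```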
|
436 |
Linear Unification / Wilbanks, John W. (John Winston), 12 1900 (has links)
Efficient unification is considered within the context of logic programming. Unification is explained in terms of equivalence classes of terms, subject to the constraint that no equivalence class may contain more than one function term. It is demonstrated that several well-known "efficient" but nonlinear unification algorithms continually maintain this constraint as a consequence of their choice of data structure for representing equivalence classes. The linearity of the Paterson-Wegman unification algorithm is shown to be largely a consequence of its use of unbounded lists of pointers for representing equivalences between terms, which allows it to avoid the nonlinearity of "union-find".
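A minimal sketch of the union-find-based view described above: terms are grouped into equivalence classes, with a representative chosen so that each class visibly keeps at most one function term. The representation is illustrative, and the occurs check is omitted for brevity.

```python
class Term:
    """A term is a variable (fun is None) or an application fun(*args)."""
    def __init__(self, fun=None, args=()):
        self.fun, self.args = fun, args
        self.parent = self              # union-find parent pointer

def find(t):
    # Path compression: the classic near-linear "union-find" behavior
    # whose residual nonlinearity Paterson-Wegman manages to avoid.
    while t.parent is not t:
        t.parent = t.parent.parent
        t = t.parent
    return t

def union(a, b):
    ra, rb = find(a), find(b)
    if ra is not rb:
        # Keep a function term (if any) as the class representative, so
        # each class contains at most one function term.
        if ra.fun is None:
            ra, rb = rb, ra
        rb.parent = ra

def unify(s, t):
    s, t = find(s), find(t)
    if s is t:
        return True
    if s.fun is not None and t.fun is not None:
        if s.fun != t.fun or len(s.args) != len(t.args):
            return False        # clash: two distinct function terms
        union(s, t)
        return all(unify(a, b) for a, b in zip(s.args, t.args))
    union(s, t)                 # at least one side is a variable
    return True

# Toy usage: unify f(X, b) with f(a, Y)  ->  True, with X ~ a and Y ~ b.
X, Y, a, b = Term(), Term(), Term("a"), Term("b")
print(unify(Term("f", (X, b)), Term("f", (a, Y))))   # True
print(find(X).fun, find(Y).fun)                       # a b
```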
|
437 |
Optimising the frequency assignment problem utilizing particle swarm optimisation / Bezuidenhout, William, 08 October 2014 (has links)
M.Sc. (Information Technology) / A new particle swarm optimisation (PSO) algorithm that produces solutions to the fixed spectrum frequency assignment problem (FS-FAP) is presented. Solutions to the FS-FAP are used to allocate frequencies in a mobile telecommunications network and must have low interference. The standard PSO algorithm's velocity method and global selection are ill suited to the frequency assignment problem (FAP). Therefore, using the standard PSO algorithm as a base, new techniques are developed to allow it to operate on the FAP. The new techniques include two velocity methods and three global selection schemes. This study presents the results of the algorithm operating on the Siemens set of COST 259 problems and shows that applying the PSO to the FAP is viable.
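For reference, here is a sketch of the standard continuous PSO that serves as the base algorithm, shown on a toy objective. The FAP-specific velocity methods and global selection schemes developed in the study are not shown, and the coefficients are typical textbook values.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimize objective(x) with standard global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Toy usage: minimize the sphere function; the optimum is the origin.
best, val = pso(lambda x: np.sum(x**2), dim=4)
print(best, val)
```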
|
438 |
A Geometric Approach to Dynamical System: Global Analysis for Non-Convex Optimization / Xu, Ji, January 2020
Non-convex optimization often plays an important role in many machine learning problems. Studying the existing algorithms that aim to solve non-convex optimization problems can help us understand the optimization problems themselves and may shed light on developing more effective algorithms or methods. In this thesis, we study two popular non-convex optimization problems along with two popular algorithms.
The first pair is maximum likelihood estimation with the expectation maximization algorithm. Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. We address this disconnect between the statistical principles behind EM and its algorithmic properties.
Specifically, we provide a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from an idealized execution of EM in the infinite sample limit, and fully characterizing the limit points of the sequence in terms of the initial parameters; and then (ii) based on this convergence analysis, establishing statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.
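As a point of reference, here is a minimal sketch of the EM iteration for the symmetric setting commonly analyzed in this line of work: a balanced mixture of two unit-variance Gaussians with means ±θ, for which the E- and M-steps collapse to a one-line fixed-point update. The data and starting point are illustrative; this is the textbook iteration, not the thesis's analysis.

```python
import numpy as np

def em_symmetric_2gmm(x, theta0=0.5, iters=100):
    """EM for the mixture 0.5*N(theta, 1) + 0.5*N(-theta, 1).

    E-step: the posterior of component +1 is sigmoid(2*theta*x),
            so 2*w - 1 = tanh(theta * x).
    M-step: theta = mean((2*w - 1) * x).
    Together: theta <- mean(tanh(theta * x) * x).
    """
    theta = theta0
    for _ in range(iters):
        theta = np.mean(np.tanh(theta * x) * x)
    return theta

# Toy usage: 5000 samples with true theta = 2.
rng = np.random.default_rng(0)
z = rng.choice([-1, 1], size=5000)
x = z * 2.0 + rng.standard_normal(5000)
print(em_symmetric_2gmm(x))   # ≈ 2 (a negative start converges to ≈ -2)
```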
The second pair is the phase retrieval problem with the approximate message passing algorithm. Specifically, we consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations. We design and study the performance of a message passing algorithm that aims to solve this optimization problem. We consider the asymptotic setting $m, n \rightarrow \infty$, $m/n \rightarrow \delta$, and obtain sharp performance bounds, where $m$ is the number of measurements and $n$ is the signal dimension. We show that for complex signals the algorithm can perform accurate recovery with only $m = \left(\frac{64}{\pi^2} - 4\right)n \approx 2.5n$ measurements. The sharp analyses in this thesis enable us to compare the performance of our method with other phase recovery schemes.
Finally, the convergence analysis of the iterative algorithms is carried out using a geometric approach to dynamical systems. By analyzing the movement from iteration to iteration, we provide a general tool that can show global convergence for many two-dimensional dynamical systems. We hope this can shed light on convergence analysis for general dynamical systems.
|
439 |
Resource Allocation In Large-Scale Distributed Systems / Shafiee, Mehrnoosh, January 2021
The focus of this dissertation is the design and analysis of scheduling algorithms for distributed computer systems, i.e., data centers. Today's data centers can contain thousands of servers and typically use a multi-tier switch network to provide connectivity among the servers. Data centers host the execution of various data-parallel applications. As an abstraction, a job in a data center can be thought of as a group of interdependent tasks, each with various requirements, which need to be scheduled for execution on the servers, together with the data flows between the tasks that need to be scheduled in the switch network. In this thesis, we study both flow and task scheduling problems under the features of modern parallel computing frameworks.

For the flow scheduling problem, we study three models.
The first model considers a general network topology where flows among the various source-destination pairs of servers are generated dynamically over time. The goal is to assign the end-to-end data flows among the available paths in order to efficiently balance the load in the network. We propose a myopic algorithm that is computationally efficient and prove that it asymptotically minimizes the total network cost using a convex optimization model, fluid limit and Lyapunov analysis. We further propose randomized versions of our myopic algorithm.
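As a concrete illustration of the myopic idea, here is a small sketch that routes each arriving flow on the candidate path with the least marginal increase in a convex network cost, taken here to be the sum of squared link loads. The cost function, path sets, and loads are assumptions for the example, not the thesis's exact formulation.

```python
def route_flow(paths, load, demand):
    """Pick the path whose selection least increases sum(load^2) over links.

    paths:  list of candidate paths, each a list of link ids
    load:   dict link id -> current load
    demand: size of the arriving flow
    """
    def marginal_cost(path):
        return sum((load[e] + demand) ** 2 - load[e] ** 2 for e in path)

    best = min(paths, key=marginal_cost)
    for e in best:
        load[e] += demand   # commit the flow to the chosen path
    return best

# Toy usage: two parallel two-link paths for one source-destination pair.
load = {"a1": 3.0, "a2": 0.0, "b1": 1.0, "b2": 1.0}
print(route_flow([["a1", "a2"], ["b1", "b2"]], load, demand=1.0))  # b path
```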
The second model considers the case in which there is dependence among flows. Specifically, a coflow is defined as a collection of parallel flows whose completion time is determined by the completion time of the last flow in the collection. Our main result is a 5-approximation deterministic algorithm that schedules coflows in polynomial time so as to minimize the total weighted completion time. The key ingredient of our approach is an improved linear program formulation for sorting the coflows, followed by a simple list scheduling policy.
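The single-machine primitive behind weighted-completion-time scheduling is Smith's rule, sketched below. The coflow algorithm itself is more involved (an LP-based ordering plus list scheduling across a network), so this only illustrates the ordering idea; the job data are made up.

```python
def smith_rule(jobs):
    """Minimize total weighted completion time on a single machine by
    sequencing jobs in decreasing weight / processing-time ratio
    (Smith's rule, optimal for 1 || sum w_j C_j).

    jobs: list of (weight, processing_time) pairs.
    Returns the sequence and its total weighted completion time.
    """
    order = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    t = total = 0.0
    for w, p in order:
        t += p                  # completion time of this job
        total += w * t
    return order, total

# Toy usage: the heavy, short job is sequenced first.
print(smith_rule([(1.0, 4.0), (3.0, 2.0), (2.0, 2.0)]))
```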
Lastly, we study scheduling coflows of multi-stage jobs to minimize the jobs' total weighted completion time. Each job is represented by a DAG (Directed Acyclic Graph) over its coflows that captures the dependencies among them. We define g(m) = log(m)/log(log(m)) and h(m, μ) = log(mμ)/log(log(mμ)), where m is the number of servers and μ is the maximum number of coflows in a job. We develop two algorithms with approximation ratios O(√μ g(m)) and O(√μ g(m) h(m, μ)) for jobs with general DAGs and rooted trees, respectively. The algorithms rely on randomly delaying and merging optimal schedules of the coflows in a job's DAG, followed by enforcing the dependencies among coflows and the links' capacity constraints.
For the task scheduling problem, we study two models. We consider a setting where each job consists of a set of parallel tasks that need to be processed on different servers, and the job is completed once all of its tasks finish processing. In the first model, each job is associated with a utility that is a decreasing function of its completion time. The objective is to schedule tasks in a way that achieves max-min fairness for the jobs' utilities. We first show a strong result regarding the NP-hardness of this problem. We then proceed to define two notions of approximate solutions and develop scheduling algorithms that provide guarantees under these approximation notions, using dynamic programming and random perturbation of the tasks' processing times. In the second model, we further assume that the processing times of tasks can be server dependent and that a server can process (pack) multiple tasks at the same time, subject to its capacity. We then propose three algorithms with approximation ratios of 4, (6 + ε), and 24 for the different cases where preemption and migration of tasks among the servers are or are not allowed. Our algorithms use a combination of linear program relaxation and greedy packing techniques; a toy version of the packing step is sketched below.
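A minimal first-fit sketch of greedy packing onto capacitated servers. The demands and capacities are illustrative; the thesis's algorithms add an LP relaxation and a case analysis on preemption/migration to obtain the stated ratios.

```python
def first_fit_pack(tasks, capacities):
    """Greedy packing baseline: place each task (a scalar resource demand)
    on the first server with enough remaining capacity, or -1 if none fits.
    Returns the chosen server index for each task, in order.
    """
    remaining = list(capacities)
    assignment = []
    for demand in tasks:
        for i, free in enumerate(remaining):
            if demand <= free:
                remaining[i] -= demand
                assignment.append(i)
                break
        else:
            assignment.append(-1)   # no server can fit this task
    return assignment

# Toy usage with two servers of capacity 8.
print(first_fit_pack([5, 4, 3, 2, 2], capacities=[8, 8]))  # [0, 1, 0, 1, 1]
```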
To demonstrate the gains in practice, we evaluate all the proposed algorithms and compare their performance with that of prior approaches through extensive simulations using real and synthesized traffic traces. We hope this work inspires improvements to existing job management and scheduling in distributed computer systems.
|
440 |
New Methods in Sublinear Computation for High Dimensional Problems / Waingarten, Erik Alex, January 2020
We study two classes of problems within sublinear algorithms: data structures for approximate nearest neighbor search, and property testing of Boolean functions. We develop algorithmic and analytical tools for proving upper and lower bounds on the complexity of these problems, and obtain the following results:
* We give data structures for approximate nearest neighbor search achieving state-of-the-art approximations for various high-dimensional normed spaces. For example, our data structure for arbitrary normed spaces over R^d answers queries in sublinear time while using nearly linear space, and achieves an approximation that is sub-polynomial in the dimension. (A toy locality-sensitive-hashing sketch illustrating sublinear-time approximate nearest neighbor search follows this list.)
* We prove query complexity lower bounds for property testing of three fundamental properties: k-juntas, monotonicity, and unateness. Our lower bounds for non-adaptive junta testing and adaptive unateness testing are nearly optimal, and our lower bound for adaptive monotonicity testing is the best currently known.
* We give an algorithm for testing unateness with nearly optimal query complexity. The algorithm is crucially adaptive and based on a novel analysis of binary search over long paths of the hypercube.
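The sketch promised above: a toy random-hyperplane locality-sensitive hashing index, one classical route to sublinear-time approximate nearest neighbor search under cosine similarity. This is not the thesis's construction for general normed spaces, and all parameters (bits, tables, dimensions) are illustrative.

```python
import numpy as np
from collections import defaultdict

class HyperplaneLSH:
    """Toy random-hyperplane LSH index for cosine similarity: nearby
    vectors agree on most hyperplane signs, so they share buckets."""
    def __init__(self, dim, n_bits=12, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_tables, n_bits, dim))
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.points = []

    def _keys(self, x):
        # One bucket key per table: the sign pattern of x against that
        # table's n_bits random hyperplanes.
        return [tuple((p @ x > 0).astype(int)) for p in self.planes]

    def add(self, x):
        self.points.append(x)
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(len(self.points) - 1)

    def query(self, q):
        # Only candidates sharing a bucket with q are examined, typically
        # a small fraction of the data set (hence sublinear in practice).
        cand = {i for t, k in zip(self.tables, self._keys(q))
                for i in t.get(k, ())}
        cos = lambda i: (self.points[i] @ q) / (
            np.linalg.norm(self.points[i]) * np.linalg.norm(q) + 1e-12)
        return max(cand, key=cos) if cand else None

# Toy usage: querying with a stored point should return its own index.
rng = np.random.default_rng(1)
index = HyperplaneLSH(dim=64)
for x in rng.standard_normal((1000, 64)):
    index.add(x)
print(index.query(index.points[42]))   # 42 (it shares all of its buckets)
```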
|