431 |
A Grid-Based Approximation Algorithm for the Minimum Weight Triangulation Problem. Wessels, Mariette Christine, 06 June 2017.
Given a set of n points in the plane, the Minimum Weight Triangulation problem asks for a triangulation that minimizes the sum of the Euclidean lengths of its edges. The problem has been studied for more than four decades and is notoriously difficult; indeed, its complexity status remained open until it was recently shown to be NP-hard. We present a novel polynomial-time algorithm that computes a 16-approximation of the minimum weight triangulation, an approximation factor significantly smaller than any previously known.
To construct our candidate solution, our algorithm uses grids to partition edges into levels by increasing weight, so that edges with similar weights appear in the same level. We incrementally triangulate the point set by constructing a growing partial triangulation for each level, introducing edges in increasing order of level. At each level, we use a variant of the ring heuristic followed by a greedy heuristic to add edges, finally producing a complete triangulation of the point set. In our analysis, we reduce the comparison between the weights of the candidate and optimal solutions to a comparison between the cardinalities of the two underlying graphs. We develop a new technique for comparing the cardinalities of planar straight-line graphs, and in combination with properties imposed by the grid structure, we bound the approximation ratio. / Master of Science / Given a set of n points in a plane P, a triangulation of P is a set of edges such that no two edges intersect at a point not in P, and the edges subdivide the convex hull of P into triangles. Triangulations have a variety of applications, including computer graphics, finite element analysis, and interpolation, which motivates the need for efficient algorithms that compute triangulations with desirable qualities. The Minimum Weight Triangulation problem asks for the triangulation T that minimizes the sum of the Euclidean lengths of its edges, a triangulation that performs well in many of the applications mentioned above. The problem has been studied for more than four decades and is notoriously difficult; indeed, its complexity status remained open until it was recently shown to be NP-hard. We present a novel polynomial-time algorithm that computes a 16-approximation of the minimum weight triangulation, an approximation factor significantly smaller than any previously known. The algorithm makes use of grids together with variants of the ring and greedy heuristics adapted to a new setting, resulting in an elegant, efficient algorithm.
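The level-partitioning step described above can be illustrated with a minimal sketch (not taken from the thesis): candidate edges are bucketed so that edges in the same level have lengths within a constant factor of each other; the grid construction, ring heuristic, and greedy completion are omitted, and the function name and bucketing base are illustrative.

import math
from itertools import combinations

def edge_levels(points, base=2.0):
    """Bucket candidate edges of a planar point set into levels by length.

    Level k holds edges whose Euclidean length lies in [base**k, base**(k+1)),
    so edges within a level have comparable weights. Illustrative only: the
    thesis derives its levels from a grid hierarchy rather than raw lengths.
    """
    levels = {}
    for (x1, y1), (x2, y2) in combinations(points, 2):
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0.0:
            continue  # coincident points contribute no edge
        k = math.floor(math.log(length, base))
        levels.setdefault(k, []).append(((x1, y1), (x2, y2), length))
    # Return levels in increasing order of edge weight.
    return [sorted(levels[k], key=lambda e: e[2]) for k in sorted(levels)]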
|
432 |
Estimating Reachability Set Sizes in Dynamic Graphs. Aji, Sudarshan Mandayam, 01 July 2014.
Graphs are a commonly used abstraction for diverse kinds of interactions, e.g., on Twitter and Facebook. Different topological properties of such graphs are computed to gain insight into their structure. Computing these properties on large real networks is computationally very challenging. Further, most real-world networks are dynamic, i.e., they change over time. There is therefore a need for efficient dynamic algorithms that offer good space-time trade-offs. In this thesis we study the problem of computing the reachability set size of a vertex, a fundamental problem with applications in databases and social networks. We develop the first Giraph-based algorithms for different dynamic versions of this problem, which scale to graphs with millions of edges. / Master of Science
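For reference, a minimal single-machine baseline (not the Giraph-based algorithms of the thesis): the reachability set size of a vertex is the number of vertices reachable from it, which a plain BFS computes exactly on small graphs. The thesis targets dynamic graphs with millions of edges, where distributed estimation is required; the adjacency-map representation here is only illustrative.

from collections import deque

def reachability_set_size(adj, source):
    """Exact reachability set size of `source` in a directed graph.

    `adj` maps each vertex to an iterable of out-neighbors. The count
    includes the source itself.
    """
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

# Example: 0 -> 1 -> 2, 0 -> 2, vertex 3 isolated
print(reachability_set_size({0: [1, 2], 1: [2], 3: []}, 0))  # prints 3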
|
433 |
A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice. Liu, Weigang, January 2019.
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity in comparison with the StochSim algorithm, another simulation algorithm originally proposed for stochastic lattice models. I compare the performance of both algorithms on two examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison of the simulation results from both algorithms validates our claim that the new algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better in certain specific cases. / Computer simulation has been in development for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit random behaviors according to specific rules. Such models are mainly studied using computer simulations. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods so that it is more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for the stochastic lattice model, using different concepts and procedures. I developed a simulation package and applied it to the two examples using both methods, then conducted a series of numerical experiments to compare their performance. I conclude that the two methods are roughly equivalent and that the new method performs better in certain special cases.
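For context, a minimal sketch of the classical (well-stirred) Gillespie direct method that the thesis adapts to lattice systems; the propensity and update callbacks are placeholders, and the lattice-specific bookkeeping of the proposed algorithm is not shown.

import math
import random

def gillespie_direct(propensities, fire, state, t_end, rng=random.random):
    """Classical Gillespie direct method for a well-stirred system.

    `propensities(state)` returns a list of nonnegative reaction propensities;
    `fire(state, i)` returns the state after reaction i fires. The lattice
    variant tracks per-site propensities, but the waiting-time and
    reaction-selection steps follow this same scheme.
    """
    t = 0.0
    while t < t_end:
        a = propensities(state)
        a0 = sum(a)
        if a0 <= 0.0:
            break                                # no reaction can fire
        t += -math.log(1.0 - rng()) / a0         # exponentially distributed waiting time
        threshold = rng() * a0
        acc = 0.0
        for i, ai in enumerate(a):               # choose reaction i with probability a_i / a0
            acc += ai
            if acc >= threshold:
                break
        state = fire(state, i)
    return state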
|
434 |
Synthesizing a Hybrid Benchmark Suite with BenchPrime. Wu, Xiaolong, 09 October 2018.
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and it selects the best clustering algorithm and an optimized number of clusters in an automated, metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites.
As a case study, this work compares, for the first time, DenBench with MediaBench and MiBench using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures the degree to which DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means of identifying an essential subset from the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve an accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average towards their shared characteristics, overestimating the benefit of any proposed technique. This work proposes BenchPrime, a machine learning-based framework that generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites.
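A rough sketch (not the BenchPrime implementation) of the "LDA projection followed by automated cluster-count selection" idea described above, using scikit-learn; k-means and the silhouette criterion are stand-ins for BenchPrime's metric-tailored choice of clustering algorithm and cluster count.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def reduce_and_cluster(features, suite_labels, max_clusters=10):
    """Project benchmark feature vectors with LDA, then keep the k-means
    clustering whose cluster count maximizes the silhouette score.

    `features` is an (n_benchmarks, n_metrics) array and `suite_labels`
    names the suite each benchmark comes from.
    """
    projected = LinearDiscriminantAnalysis().fit_transform(features, suite_labels)
    best = None
    for k in range(2, max_clusters + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(projected)
        score = silhouette_score(projected, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (silhouette score, number of clusters, cluster assignment)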
|
435 |
The AlgoViz Project: Building an Algorithm Visualization Web Community. Alon, Alexander Joel Dacara, 13 September 2010.
Algorithm visualizations (AVs) have become a popular teaching aid in classes on algorithms and data structures. The AlgoViz Project attempts to provide an online venue for educators, students, developers, researchers, and other AV users.
The Project comprises two websites. The first, the AlgoViz Portal, provides two major informational resources: an AV catalog that provides both descriptive and evaluative metadata for indexed visualizations, and an annotated bibliography of research literature. Both resources have over 500 entries and are actively updated by the AV community. The Portal also provides field reports, discussion forums, and other community-building mechanisms. The second website, OpenAlgoViz, is a SourceForge site intended to showcase exemplary AVs as well as provide logistical and hosting support to AV developers. / Master of Science
|
436 |
Partitioning Methods and Algorithms for Configurable Computing Machines. Chandrasekhar, Suresh, 18 August 1998.
This thesis addresses the partitioning problem for configurable computing machines. Specifically, this thesis presents algorithms to partition chain-structured task graphs across configurable computing machines. The algorithms give optimal solutions for throughput and total execution time for these problems under constraints on area, pin count, and power consumption. The algorithms provide flexibility for applying these constraints while remaining polynomial in complexity. Proofs of correctness as well as an analysis of runtime complexity are given. Experiments are performed to illustrate the runtime of these algorithms. / Master of Science
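As an illustration of the flavor of problem addressed (not the thesis algorithms themselves), a small dynamic program that optimally partitions a chain of tasks into contiguous segments under a per-segment area budget, minimizing total execution time with a fixed reconfiguration cost per segment; the pin-count and power constraints and the throughput objective treated in the thesis are omitted, and all names are illustrative.

def partition_chain(times, areas, area_budget, reconfig_cost):
    """Optimal contiguous partition of a chain-structured task graph.

    Total execution time = sum of task times plus one reconfiguration cost
    per segment; every segment must fit within `area_budget`.
    Runs in O(n^2) time over n tasks.
    """
    n = len(times)
    INF = float("inf")
    best = [INF] * (n + 1)        # best[i]: minimum time to run tasks 0..i-1
    cut = [-1] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        seg_area, seg_time = 0.0, 0.0
        for j in range(i, 0, -1):                 # candidate segment = tasks j-1 .. i-1
            seg_area += areas[j - 1]
            seg_time += times[j - 1]
            if seg_area > area_budget:
                break                             # extending the segment only grows its area
            cand = best[j - 1] + seg_time + reconfig_cost
            if cand < best[i]:
                best[i], cut[i] = cand, j - 1
    if best[n] == INF:
        return INF, []                            # some single task exceeds the area budget
    segments, i = [], n
    while i > 0:                                  # recover segment boundaries
        segments.append((cut[i], i))
        i = cut[i]
    return best[n], segments[::-1]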
|
437 |
Thermal Characterization of Complex Aerospace Structures. Hanuska, Alexander Robert Jr., 24 April 1998.
Predicting the performance of complex structures exposed to harsh thermal environments is a crucial issue in many of today's aerospace and space designs. To predict the thermal stresses a structure might be exposed to, the thermal properties of the independent materials used in the design of the structure need to be known. Therefore, a noninvasive estimation procedure involving Genetic Algorithms was developed to determine the various thermal properties needed to adequately model the Outer Wing Subcomponent (OWS), a structure located at the trailing edge of the High Speed Civil Transport's (HSCT) wing tip.
Due to the nature of the nonlinear least-squares estimation method used in this study, both theoretical and experimental temperature histories were required. Several one-dimensional and two-dimensional finite element models of the OWS were developed to compute the transient theoretical temperature histories. The experimental data were obtained from optimized experiments that were run at various surrounding temperature settings to investigate the temperature dependence of the estimated properties. An experimental optimization was performed to provide the most accurate estimates and reduce the confidence intervals.
The simultaneous estimation of eight thermal properties, including the volumetric heat capacities and out-of-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes, was successfully completed with the one-dimensional model, and the results were used to evaluate the remaining in-plane thermal conductivities of the facesheets, the honeycomb, the skins, and the torque tubes with the two-dimensional model. Although experimental optimization did not eliminate all correlation between the parameters, the minimization procedure based on the Genetic Algorithm performed extremely well, despite the high degree of correlation and low sensitivity of many of the parameters. / Master of Science
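The estimation described above amounts to minimizing a sum-of-squares mismatch between measured and simulated temperature histories over the unknown properties. A hedged sketch follows, using SciPy's differential evolution as a generic evolutionary optimizer in place of the thesis's Genetic Algorithm; the `simulate` callback stands in for the finite element model of the OWS, and all names are illustrative.

import numpy as np
from scipy.optimize import differential_evolution

def estimate_thermal_properties(simulate, t_measured, T_measured, bounds):
    """Estimate thermal properties by least squares on temperature histories.

    `simulate(params, t)` returns the modeled temperatures at times `t` for a
    candidate property vector; `bounds` is one (low, high) pair per property
    (heat capacities, conductivities, ...).
    """
    def sse(params):
        residual = T_measured - simulate(params, t_measured)
        return float(np.sum(residual ** 2))

    result = differential_evolution(sse, bounds, seed=0)
    return result.x, result.fun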
|
438 |
Knowledge-Discovery Incorporated Evolutionary Search for Microcalcification Detection in Breast Cancer Diagnosis. Peng, Yonghong; Yao, Bin; Jiang, Jianmin, January 2006.
Objectives
The presence of microcalcifications (MCs), clusters of tiny calcium deposits that appear as small bright spots in a mammogram, has been considered a very important indicator for breast cancer diagnosis. Much research has been devoted to developing computer-aided systems for the accurate identification of MCs; however, automatic computer-based detection of MCs has proven difficult because of the complicated nature of the surrounding breast tissue and the variation of MCs in shape, orientation, brightness, and size.
Methods and materials
This paper presents a new approach for the effective detection of MCs by incorporating a knowledge-discovery mechanism into the genetic algorithm (GA). In the proposed approach, called the knowledge-discovery incorporated genetic algorithm (KD-GA), the genetic algorithm searches for bright spots in a mammogram, and a knowledge-discovery mechanism is integrated to improve the performance of the GA. The knowledge-discovery mechanism evaluates the possibility that a bright spot is a true MC and adaptively adjusts the associated fitness values. This fitness adjustment indirectly guides the GA to extract true MCs and eliminate false MCs (FMCs).
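A minimal sketch of the fitness-adjustment idea (not the authors' code): the raw GA fitness of a candidate spot, driven by local brightness, is rescaled by a knowledge-derived plausibility score so that candidates unlikely to be true MCs are suppressed. The plausibility callback stands in for the knowledge-discovery mechanism, and all names are illustrative.

import numpy as np

def evaluate_population(image, population, plausibility, floor=0.1):
    """Evaluate a GA population of candidate (x, y) spot positions.

    `image` is a 2-D mammogram array; `plausibility(image, x, y)` returns a
    score in [0, 1] estimating whether the spot is a true microcalcification.
    Implausible spots keep only a small residual weight (`floor`), steering
    the GA away from false MCs.
    """
    fitness = []
    for x, y in population:
        raw = float(image[y, x])              # brightness-driven raw fitness
        score = plausibility(image, x, y)     # knowledge-derived adjustment
        fitness.append(raw * max(score, floor))
    return np.asarray(fitness)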
Results and conclusions
The experimental results demonstrate that incorporating the knowledge-discovery mechanism into the genetic algorithm eliminates FMCs and produces improved performance compared with conventional GA methods. Furthermore, the experimental results show that the proposed KD-GA method provides a promising and generic approach to the development of computer-aided diagnosis for breast cancer.
|
439 |
Cross-Platform Cloth Simulation API for Games. Tang, W.; Sagi, A.S.; Green, D.; Wan, Tao Ruan, 04 June 2016.
Physics simulation is an active research topic in games because, without realistic physics, even the most beautiful game feels static and lifeless. Although cloth simulation is not new in computer graphics, highly detailed cloth is rare in video games and mostly coarse, owing to the complex and nonlinear physical properties of cloth, which require substantial computing power to simulate in real time. This paper presents a robust and scalable real-time cloth simulation framework, exploring a variety of modern simulation techniques to produce realistic cloth simulation for games. The framework integrates with the OpenCL GPGPU library to leverage parallelism. The final result is an API for game development with enriched interactive environments.
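For illustration, a CPU-side sketch of one common building block of real-time cloth simulators: position Verlet integration of a mass-spring sheet followed by iterative distance-constraint relaxation. The paper's framework offloads this kind of work to the GPU through OpenCL; all parameter names and values here are illustrative.

import numpy as np

def cloth_step(pos, prev_pos, springs, rest_len, dt,
               gravity=(0.0, -9.81, 0.0), stiffness=0.9, iterations=4):
    """One position-Verlet step of a mass-spring cloth.

    `pos` and `prev_pos` are (N, 3) float arrays of particle positions;
    `springs` is a list of (i, j) index pairs with rest lengths `rest_len`.
    Returns the (new, current) position pair for the next step.
    """
    accel = np.asarray(gravity)
    new_pos = pos + (pos - prev_pos) + accel * dt * dt   # Verlet integration
    for _ in range(iterations):                          # relax spring constraints
        for (i, j), L in zip(springs, rest_len):
            delta = new_pos[j] - new_pos[i]
            dist = float(np.linalg.norm(delta))
            if dist < 1e-9:
                continue
            corr = 0.5 * stiffness * (dist - L) / dist * delta
            new_pos[i] += corr
            new_pos[j] -= corr
    return new_pos, pos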
|
440 |
[en] SYNTHESIS OF SEQUENTIAL MACHINES BY SHIFT REGISTERS USING HEURISTIC PROCESSES / [pt] SÍNTESE DE MÁQUINAS SEQÜENCIAIS POR REGISTROS DE DESLOCAMENTO UTILIZANDO PROCESSOS HEURÍSTICOS. Bril, Ivan, 11 June 2007.
[pt] The work consists of an algorithm, programmed in the LISP/360 language, that synthesizes a sequential machine by shift registers using heuristic processes. The state assignment of the machine is performed, and the registers are of equal length. Part of the algorithm consists of a process of adding transitory states, making any machine realizable by registers of length 2. / [en] This paper is concerned with the equal-length shift-register realization of sequential machines. The state assignment is restricted to one code per state. An algorithm programmed in LISP/360 is used, which synthesizes a given sequential machine by shift registers. A part of this algorithm adds transitory states, making possible the synthesis of every sequential machine by shift registers of length 2.
|