641. Improved I/O pad positions assignment algorithm for sea-of-gates placement / Her, Shyang-Kuen (01 January 1992)
A new heuristic method is proposed to improve I/O pad assignment for the sea-of-gates placement algorithm "PROUD". In PROUD, the preplaced I/O pads serve as boundary conditions in solving sparse linear equations for the optimal module placement. Because the total wire length determined by the module positions is a strong function of the preplaced I/O pad positions, the thesis attempts to optimize both the circular order of the I/O pads and their assignment to physical locations on the chip. The proposed I/O pad assignment program runs as a preprocessing step for PROUD. The results show excellent improvement.
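As a minimal illustration of the mechanism the abstract describes (the netlist, weights, and pad coordinates below are invented, and PROUD's actual formulation is not reproduced), fixing the pad positions and solving a sparse linear system for the movable modules looks like this in Python:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# Toy netlist: weighted two-pin nets over 5 nodes. Nodes 3 and 4 are
# preplaced I/O pads (fixed); nodes 0-2 are movable modules.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 3, 1.0), (2, 4, 1.0)]
n = 5
fixed = {3: 0.0, 4: 10.0}              # pad index -> fixed x-coordinate
movable = [i for i in range(n) if i not in fixed]

# Graph Laplacian L: total quadratic wire length is x^T L x.
L = lil_matrix((n, n))
for i, j, w in edges:
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w
L = csr_matrix(L)

# Partition into movable (m) and fixed (f) blocks. The pad positions
# appear only on the right-hand side: solve L_mm x_m = -L_mf x_f.
pads = list(fixed)
L_mm = L[movable][:, movable].tocsc()
L_mf = L[movable][:, pads]
x_f = np.array([fixed[p] for p in pads])
x_m = spsolve(L_mm, -(L_mf @ x_f))
print(dict(zip(movable, np.round(x_m, 3))))   # optimal module x-coords
```

Reassigning pads to different physical locations changes only the right-hand side of the system, which is why the pad assignment optimized in this thesis has so much leverage over the resulting placement and wire length.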
642. Compiling ACE for Distributed-Memory Machines / Song, Jun (05 November 1992)
Distributed-memory machines offer a very high level of performance, flexibility, and scalability. But the memory organization of these machines requires processes on different processors to communicate explicitly by sending and receiving messages. As a result, the programmer faces the enormously difficult task of detailed planning of algorithm-irrelevant, low-level communication issues. This level of programming resembles writing assembly programs for a sequential machine. ACE is a message-passing language with abstract communication statements, defined by Dr. Jingke Li at Portland State University. Communication in ACE is still explicit, but it is abstracted to a higher level. The abstraction helps balance ease of programming against high performance. This thesis discusses how those high-level communication abstractions can be transformed into low-level communication routines. It presents the design and implementation of a compiler that transforms an ACE program into a C program with low-level communication routines. The compiler is implemented for the Intel iPSC/2 hypercube multiprocessor. Compared to their low-level counterparts, ACE programs are easier to write and more understandable. Compared to their high-level counterparts, they compile to more efficient code, since the communication information is expressed explicitly in ACE and the compiler itself is much less complex. ACE also lets users fine-tune critical communication segments. Several well-known parallel algorithms written in ACE are compiled as examples, and experimental results on their performance are included.
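As a rough sketch of the kind of lowering such a compiler performs (this is illustrative Python with mpi4py, not ACE syntax or actual compiler output; the high-level statement exchange(d, x) is hypothetical), a single abstract exchange along one hypercube dimension might expand into an explicit, deadlock-free send/receive pair:

```python
# Hypothetical lowering of a high-level "exchange along hypercube
# dimension d" statement into explicit message passing. The name
# exchange() is illustrative only; it is not ACE syntax.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def exchange(d, local_data):
    """Swap local_data with the neighbor whose rank differs in bit d."""
    partner = rank ^ (1 << d)     # hypercube neighbor along dimension d
    # sendrecv pairs the send and the receive so neither side can
    # deadlock -- exactly the low-level detail an abstract communication
    # statement hides from the programmer.
    return comm.sendrecv(local_data, dest=partner, source=partner)

# Each process contributes its rank; after exchanging along dimension 0,
# even/odd rank pairs have swapped values.
received = exchange(0, rank)
print(f"rank {rank} received {received}")
```

Run with an even number of ranks, e.g. mpiexec -n 4, so every process has a partner along dimension 0.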
643. Encoding and parsing of algebraic expressions by experienced users of mathematics / Jansen, Anthony Robert, 1973- (January 2002)
Abstract not available
644. Efficient computational approach to identifying overlapping documents in large digital collections / Monostori, Krisztian, 1975- (January 2002)
Abstract not available
645. A multi-fidelity analysis selection method using a constrained discrete optimization formulation / Stults, Ian Collier (17 August 2009)
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies; when it is considered at all, it is considered in only a limited fashion, which calls into question the validity of selections based on those results. Neglecting model uncertainty can lead to costly redesigns later in the design process, or even to program cancellation. If one were instead to quantify the model uncertainty of the tools being used, and to use this information in selecting the tools for each contributing analysis, studies could be conducted more efficiently and trust in the results could be quantified. Existing methods for doing this are generally neither rigorous nor traceable, and in many cases the improvement gained from enhanced calculations, and the additional time spent performing them, is washed out by less accurate calculations performed downstream. This research resolves that issue by providing a method that minimizes the time spent conducting computer simulations while meeting accuracy and concept resolution requirements for the results.
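As a toy sketch of the underlying selection problem (the analyses, numbers, and additive error model below are invented, not taken from the thesis), one can cast it as choosing one fidelity level per contributing analysis to minimize predicted error subject to a runtime budget:

```python
from itertools import product

# Toy fidelity options per contributing analysis: (runtime, model error).
options = {
    "aero":       [(1.0, 0.30), (5.0, 0.10), (20.0, 0.02)],
    "structures": [(0.5, 0.25), (3.0, 0.08)],
    "propulsion": [(2.0, 0.20), (8.0, 0.05)],
}
budget = 15.0                                # total runtime allowed

best = None
for combo in product(*options.values()):     # one level per analysis
    time = sum(t for t, _ in combo)
    error = sum(e for _, e in combo)         # crude additive error model
    if time <= budget and (best is None or error < best[0]):
        best = (error, time, combo)

error, time, combo = best
for name, (t, e) in zip(options, combo):
    print(f"{name}: runtime {t}, error {e}")
print(f"total runtime {time}, total error {error:.2f}")
```

Brute-force enumeration suffices here only because the space is tiny; with many analyses and fidelity levels, the problem becomes a genuine constrained discrete optimization.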
646. Multi-tree Monte Carlo methods for fast, scalable machine learning / Holmes, Michael P. (09 January 2009)
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods:
1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points.
2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points.
3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries.
The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
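As a toy sketch of the empirical Monte Carlo half of this marriage (invented data; the tree-based stratification and adaptive sample-size bounds of the actual methods are not shown), a kernel sum of the kind in item 2 can be estimated from a random subset of the reference points:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for the kernel sums these methods accelerate:
# f(q) = (1/N) sum_j exp(-||q - x_j||^2 / (2 h^2)) over N reference points.
N, h = 1_000_000, 0.5
X = rng.normal(size=(N, 2))
q = np.zeros(2)

def mean_kernel(points):
    d2 = np.sum((points - q) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h * h)))

# Monte Carlo: average over a random subset of the reference points
# instead of all N. The actual algorithms add tree-based stratification
# and concentration bounds to choose sample sizes adaptively.
m = 2_000
sample = X[rng.choice(N, size=m, replace=False)]
print(f"exact {mean_kernel(X):.4f}  MC estimate {mean_kernel(sample):.4f}")
```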
647. Increasing the efficiency of network interface card / Uppal, Amit (January 2007)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
648. The vehicle routing problem with simultaneous pick-up and deliveries and a GRASP-GA based solution heuristic / Vural, Arif Volkan (January 2007)
Thesis (Ph.D.)--Mississippi State University. Department of Industrial and Systems Engineering. / Title from title screen. Includes bibliographical references.
649. Non-equilibrium surface growth for competitive growth models and applications to conservative parallel discrete event simulations / Verma, Poonam Santosh (January 2007)
Thesis (Ph.D.)--Mississippi State University. Department of Physics and Astronomy. / Title from title screen. Includes bibliographical references.
650. Multi-tree algorithms for computational statistics and physics / March, William B. (20 September 2013)
The Fast Multipole Method of Greengard and Rokhlin does the seemingly impossible: it approximates the quadratically scaling N-body problem in linear time. The key is to avoid explicitly computing the interactions between all pairs of N points. Instead, by organizing the data in a space-partitioning tree, distant interactions are quickly and efficiently approximated. Similarly, dual-tree algorithms, which approximate or eliminate parts of a computation using distance bounds, are the fastest algorithms for several fundamental problems in statistics and machine learning -- including all-nearest-neighbors, kernel density estimation, and Euclidean minimum spanning tree construction.
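The distance-bound pruning at the heart of dual-tree algorithms can be sketched compactly. The toy range-count below is illustrative only (production implementations use kd-trees and tighter bounds): a pair of tree nodes is pruned whenever their bounding boxes show that all pairs, or none, can lie within range.

```python
import numpy as np

# Minimal dual-tree range count: how many pairs (q, x) have
# ||q - x|| <= r? Node pairs are pruned with bounding-box distance
# bounds, the core trick behind dual-tree algorithms.

class Node:
    def __init__(self, pts):
        self.pts = pts
        self.lo, self.hi = pts.min(axis=0), pts.max(axis=0)
        self.children = []
        if len(pts) > 32:                    # split on the widest dimension
            d = int(np.argmax(self.hi - self.lo))
            mid = np.median(pts[:, d])
            left, right = pts[pts[:, d] <= mid], pts[pts[:, d] > mid]
            if len(left) and len(right):
                self.children = [Node(left), Node(right)]

def box_bounds(a, b):
    """Minimum and maximum distance between the bounding boxes of a, b."""
    gap = np.maximum(0.0, np.maximum(a.lo - b.hi, b.lo - a.hi))
    span = np.maximum(a.hi - b.lo, b.hi - a.lo)
    return np.linalg.norm(gap), np.linalg.norm(span)

def count_pairs(a, b, r):
    dmin, dmax = box_bounds(a, b)
    if dmin > r:                             # prune: no pair can be within r
        return 0
    if dmax <= r:                            # prune: every pair is within r
        return len(a.pts) * len(b.pts)
    if not a.children and not b.children:    # base case: brute force
        d = np.linalg.norm(a.pts[:, None] - b.pts[None, :], axis=2)
        return int((d <= r).sum())
    return sum(count_pairs(x, y, r)
               for x in (a.children or [a])
               for y in (b.children or [b]))

rng = np.random.default_rng(1)
Q, X = rng.random((500, 2)), rng.random((500, 2))
print(count_pairs(Node(Q), Node(X), r=0.1))
```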
We show that this overarching principle -- that by organizing points spatially, we can solve a seemingly quadratic problem in linear time -- can be generalized to problems involving interactions between sets of three or more points, providing orders-of-magnitude speedups and guaranteed runtimes that are asymptotically better than those of existing algorithms. We describe a family of algorithms, multi-tree algorithms, which can be viewed as generalizations of dual-tree algorithms. We support this thesis by developing and implementing multi-tree algorithms for two fundamental scientific applications: n-point correlation function estimation and Hartree-Fock theory.
First, we demonstrate multi-tree algorithms for n-point correlation function estimation. The n-point correlation functions are a family of fundamental spatial statistics, widely used for understanding large-scale astronomical surveys, for characterizing the properties of new materials at the microscopic level, and for segmenting and processing images. We present three new algorithms that reduce the dependence of the computation on the size of the data, increase the resolution of the result without additional time, and, through sampling, allow probabilistic estimates whose cost is independent of the problem size. We provide both empirical evidence of massive speedups and a theoretical analysis showing linear scaling in the fundamental computational task. We demonstrate the impact of a carefully optimized base case on this computation and describe our distributed, scalable, open-source implementation of our algorithms.
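As a concrete toy example of the quantities involved (invented data and matcher; the multi-tree pruning itself is not reproduced here), a 3-point count and a sampled estimate of it look like this:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
N = 300
X = rng.random((N, 2))           # toy point set (e.g., galaxy positions)
r_lo, r_hi = 0.1, 0.2            # matcher: all three pair distances in band

# Exact 3-point count, for comparison only: count triangles in the
# graph whose edges are the point pairs with distance inside the band.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
A = ((r_lo <= D) & (D <= r_hi)).astype(np.int64)
np.fill_diagonal(A, 0)
exact = int(np.trace(A @ A @ A)) // 6

def in_band(a, b):
    d = float(np.linalg.norm(X[a] - X[b]))
    return r_lo <= d <= r_hi

# Probabilistic estimate through sampling: test m random triples and
# scale by the total number of triples, so the cost depends on m, not
# on the O(N^3) number of triples.
m = 20_000
hits = sum(in_band(i, j) and in_band(j, k) and in_band(i, k)
           for i, j, k in (rng.choice(N, size=3, replace=False)
                           for _ in range(m)))
print(f"exact {exact}, sampled estimate {hits / m * comb(N, 3):.0f}")
```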
Second, we explore multi-tree algorithms as a framework for understanding the bottleneck computation in Hartree-Fock theory, a fundamental model in computational chemistry. We analyze existing fast algorithms for this problem, and show how they fit in our multi-tree framework. We also show new multi-tree methods, demonstrate that they are competitive with existing methods, and provide the first rigorous guarantees for the runtimes of all of these methods. Our algorithms will appear as part of the PSI4 computational chemistry library.
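The bottleneck referred to here is the quartic-scaling set of two-electron integrals. One standard pruning step, Schwarz screening, bounds each integral by the product of per-pair factors and skips products below a threshold; the sketch below fakes the factors (real ones come from the basis set) but shows the bound-then-prune pattern that a multi-tree framework organizes hierarchically:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fake Schwarz factors Q[p] ~ sqrt((pp)) for a list of basis-function
# pairs p; in a real Hartree-Fock code these come from the basis set.
# Log-uniform values mimic the large dynamic range of real pair factors.
npairs = 2_000
Q = np.sort(10.0 ** rng.uniform(-9, 0, size=npairs))[::-1]
negQ = -Q                            # ascending, for binary search
tau = 1e-10                          # integral neglect threshold

survivors = 0
for p in range(npairs):
    # By the Schwarz inequality, the four-index integral over bra pair p
    # and ket pair q is bounded by Q[p] * Q[q]. With Q sorted descending,
    # a binary search counts the surviving partners q without visiting
    # the pruned ones.
    survivors += int(np.searchsorted(negQ, -(tau / Q[p]), side="right"))

print(f"kept {survivors} of {npairs ** 2} bra-ket pair products")
```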