1

A divide-and-conquer method for 3D capacitance extraction

Yu, Fangqing 30 September 2004 (has links)
This thesis describes a divide-and-conquer algorithm that improves the 3D boundary element method (BEM) for capacitance extraction. We divide large interconnect structures into small sections, set new boundary conditions on the border of each section, solve each section, and then combine the results to derive the capacitance. The target application is critical nets where 3D accuracy is required. The new algorithm is a significant improvement over traditional BEMs and their enhancements, such as the "window" method, where conductors far away are dropped, and the "shield" method, where conductors hidden behind other conductors are dropped. Experimental results show that our algorithm is 25 times faster than the traditional BEM and 5 times faster than the window+shield method for medium to large structures. The error of the capacitance computed by the new algorithm is within 2% for self capacitance and 7% for coupling capacitance, compared with the results obtained by solving the entire system using BEM. Furthermore, our algorithm gives accurate distributed RC, which none of the previous 3D BEM algorithms and their enhancements can provide.
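The divide-solve-combine flow described in the abstract can be sketched schematically (a toy illustration only; `solve_section` is a hypothetical stand-in for a real BEM solve of one section under its modified boundary conditions):

```python
def solve_section(section):
    # Placeholder "BEM solve" for one section: a real implementation would
    # discretize the conductor surfaces and solve a dense linear system.
    # Here it just sums toy per-panel values so the sketch runs.
    return sum(section)

def extract_capacitance(panels, section_size):
    """Divide-and-conquer outline: split the structure into sections,
    solve each section independently, then combine the partial results."""
    sections = [panels[i:i + section_size]
                for i in range(0, len(panels), section_size)]
    return sum(solve_section(s) for s in sections)
```

The point of the structure is that each `solve_section` call works on a much smaller system than the whole interconnect, which is where the reported speedups come from.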
3

Algorithmic decision support for train scheduling in a large and highly utilised railway network

Caimi, Gabrio Curzio January 2009 (has links)
Also published as: Zürich, Technische Hochschule, doctoral dissertation, 2009
4

On the Structured Eigenvalue Problem: Methods, Analysis, and Applications

James P. Vogel (5930360) 17 January 2019 (has links)
This PhD thesis is an important development in the theories, methods, and applications of eigenvalue algorithms for structured matrices. Though eigenvalue problems have been well studied, the class of matrices that admit very fast (near-linear time) algorithms was quite small until very recently. We developed and implemented a generalization of the famous symmetric tridiagonal divide-and-conquer algorithm to a much larger class of rank-structured matrices (symmetric hierarchically semiseparable, or HSS) that appear frequently in applications. Altogether, this thesis makes valuable contributions to three different major areas of scientific computing: algorithmic development, numerical analysis, and applications. In addition to the previously stated divide-and-conquer algorithm, we generalize to larger classes of eigenvalue problems and provide several key new low-rank update algorithms. A major contribution is the analysis of the structured eigenvalue problem. In addition to standard perturbation analysis, we elucidate some subtle and previously under-examined issues in structured matrix eigenvalue problems, such as subspace contributions and secular equation conditioning. Finally, several applications are studied.
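At the heart of the symmetric tridiagonal divide-and-conquer algorithm that this thesis generalizes is a rank-one "tearing" step. A minimal sketch of just that divide step (function name and representation are illustrative, not taken from the thesis):

```python
def split_tridiagonal(d, e, k):
    """Divide step of the classic symmetric tridiagonal D&C eigensolver.

    T has diagonal d and off-diagonal e. Tearing after index k writes
        T = diag(T1, T2) + b * v v^T,
    where b = e[k] and v has ones in positions k and k+1, zeros elsewhere.
    The two halves can then be solved independently, and their spectra
    merged through the rank-one correction (the secular equation step).
    """
    b = e[k]
    d1 = d[:k + 1]
    d1[-1] -= b          # adjust last diagonal entry of T1
    d2 = d[k + 1:]
    d2[0] -= b           # adjust first diagonal entry of T2
    return (d1, e[:k]), (d2, e[k + 1:]), b
```

Adding `b` back on the torn diagonal entries and the (k, k+1) position recovers T exactly, which is what makes the recursion lossless.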
5

Chinese Input Method Based on First Mandarin Phonetic Alphabet for Mobile Devices and an Approach in Speaker Diarization with Divide-and-Conquer

Tseng, Chun-han 09 September 2008 (has links)
There are two research topics in this thesis. First, we implement a highly efficient Chinese input method. Second, we apply a divide-and-conquer scheme to the speaker diarization problem. The implemented Chinese input method transforms an input first-symbol sequence into a character string (a sentence). This means that a user only needs to input a first Mandarin phonetic symbol per character, which is very efficient compared to the current methods. The implementation is based on a dynamic programming scheme and language models. To reduce time complexity, the vocabulary for the language model consists of 1-, 2-, and 3-character words only. The speaker diarization system consists of segmentation and clustering modules. The divide-and-conquer scheme is essentially implemented in the clustering module. We evaluate the performance of our system using the speaker diarization score defined in the 2003 Rich Transcription Evaluation Plan. Compared to the baseline, our method significantly reduces the processing time without compromising diarization accuracy.
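The dynamic-programming decoding described above can be sketched as follows (the lexicon and probabilities are invented for illustration; the thesis's actual language models are far larger, and the scoring details may differ):

```python
import math

# Hypothetical toy lexicon: maps a sequence of first phonetic symbols to
# candidate words with unigram log-probabilities.
LEXICON = {
    ("n",): [("你", math.log(0.3))],
    ("h",): [("好", math.log(0.2)), ("很", math.log(0.1))],
    ("n", "h"): [("你好", math.log(0.25))],
}

def decode(initials):
    """dp[i] = best (score, sentence) covering initials[:i], extended by
    words of 1-3 symbols (mirroring the thesis's restriction of the
    vocabulary to 1-, 2-, and 3-character words)."""
    n = len(initials)
    dp = [(-math.inf, "")] * (n + 1)
    dp[0] = (0.0, "")
    for i in range(n):
        if dp[i][0] == -math.inf:
            continue
        for length in (1, 2, 3):
            if i + length > n:
                break
            key = tuple(initials[i:i + length])
            for word, lp in LEXICON.get(key, []):
                cand = (dp[i][0] + lp, dp[i][1] + word)
                if cand[0] > dp[i + length][0]:
                    dp[i + length] = cand
    return dp[n][1]
```

Each input symbol is a single keystroke, and the DP resolves the (highly ambiguous) mapping to characters, which is why one symbol per character can still yield a usable input method.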
6

Enhancing the capabilities of computational chemistry using GPU technology

Needham, Perri January 2013 (has links)
Three key enhancements were made to a semiempirical molecular orbital program to develop a fast, accurate method of calculating chemical properties of large (> 1000 atom) molecular systems through the use of quantum theory. In this thesis the key enhancements are presented, which are: the implementation of a divide-and-conquer approach to the self-consistent field procedure, to improve capability; the use of novel GPU technology to parallelize the divide-and-conquer self-consistent field procedure, to improve speed; and the implementation of a newly developed semiempirical model, the Polarized Molecular Orbital model, to improve accuracy. The development of a divide-and-conquer approach to the SCF (DC-SCF) procedure (enhancement 1) was carried out using saturated hydrocarbon chains, whereby the chain is partitioned into small overlapping subsystems and the Roothaan equations are solved for each subsystem. An investigation was carried out to find the optimal partitioning scheme for saturated hydrocarbon chains, in order to minimize the loss of energy from neglecting some of the interactions in the system whilst maintaining near-linear scaling with system size. The DC-SCF procedure was shown to be accurate to 10^-3 kcal mol^-1 per atom whilst calculating the SCF energy nearly 6 times faster than the standard SCF procedure, for a 698-atom system. The development of a parallel DC-SCF procedure and Cartesian forces calculation for use on a GPU (enhancement 2) resulted in a hybrid CPU/GPU DC-SCF implementation that calculated the energy of a 1997-atom saturated hydrocarbon chain 21 times faster than the standard serial SCF implementation, and an accelerated Cartesian forces calculation that performed 7 times faster for a saturated hydrocarbon chain of 1205 atoms, when accelerated using an NVIDIA Tesla C2050 GPU.
The hybrid CPU/GPU algorithm made use of the commercial GPU-accelerated linear algebra libraries CULA and cuBLAS. A comparison was made between CULA's accelerated eigensolver routine and the accelerated DC-eigensolver developed in this research, and it was found that for saturated hydrocarbon chains of > 350 atoms, the accelerated DC-eigensolver performed around twice as fast as the accelerated CULA eigensolver. The implementation of the Polarized Molecular Orbital model (enhancement 3) was validated against published isomerization energies and benchmarked against the non-nitrogen-containing complexes in the S66 database. The benchmark complexes were categorized according to dominant intermolecular interactions, namely hydrogen bonding, dispersion interactions, and mixed interactions. After assessment it was found that the PMO model predicts interaction energies of complexes with a mixture of dispersive and electrostatic interactions to the highest accuracy (0.69 kcal mol^-1 with respect to CCSD(T)/CBS). The dispersion correction within the PMO model was found to 'overcorrect' the dispersive contribution for most complexes tested. The outcome of this research is a semiempirical molecular orbital program that calculates the energy of a closed-shell saturated hydrocarbon chain of ~2000 atoms in under 4 minutes instead of 1.5 hours when using a PM3 Hamiltonian, and can calculate interaction energies of systems exhibiting a mixture of electrostatic and dispersive interactions to an accuracy of within 1 kcal mol^-1 (relative to high-level quantum methods). To demonstrate a suitable application for the enhanced SE-MO program, interaction energies of a series of PAHs with water, phenol and methanol have been investigated. The resultant program is suitable for calculating the energy and forces of large material and (in future) biological systems by a fast and accurate method that would be impractical or impossible without these enhancements.
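The overlapping-subsystem partitioning that underlies a DC-SCF scheme can be sketched as follows (a generic illustration; the optimal partitioning investigated in the thesis may use different core and buffer sizes):

```python
def partition_chain(n_atoms, core, buffer):
    """Divide a linear chain of atoms 0..n_atoms-1 into overlapping
    subsystems: each subsystem owns a disjoint 'core' region plus up to
    'buffer' atoms on either side, so that local interactions near the
    core boundaries are retained when each subsystem is solved alone."""
    subsystems = []
    for start in range(0, n_atoms, core):
        core_end = min(n_atoms, start + core)
        lo = max(0, start - buffer)
        hi = min(n_atoms, core_end + buffer)
        subsystems.append({"atoms": range(lo, hi),
                           "core": range(start, core_end)})
    return subsystems
```

The cores tile the chain exactly once, so subsystem results can be summed without double counting, while the buffers control the accuracy lost to neglected long-range interactions.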
7

A Multi-Dimensional Width-Bounded Geometric Separator and its Applications to Protein Folding

Oprisan, Sorinel 20 May 2005 (has links)
We used a divide-and-conquer algorithm to recursively solve the two-dimensional problem of protein folding of an HP sequence with the maximum number of H-H contacts. We derived both lower and upper bounds for the algorithmic complexity by using the newly introduced concept of a multi-directional width-bounded geometric separator. We proved that for a grid graph G with n grid points P, there exists a balanced separator A ⊆ P such that A has at most 1.02074·sqrt(n) points, and G − A has two disconnected subgraphs with at most (2/3)·n nodes each. We also derived a 0.7555·sqrt(n) lower bound for our balanced separator. Based on our multi-directional width-bounded geometric separator, we found that there is an O(n^(5.563·sqrt(n))) time algorithm for the 2D protein folding problem in the HP model. We also extended the upper bound results to rectangular and triangular lattices.
8

Advance the DNA computing

Qiu, Zhiquan Frank 30 September 2004 (has links)
It has been previously shown that DNA computing can solve problems currently intractable on even the fastest electronic computers. The algorithm design for DNA computing, however, is not straightforward. A strong background in both the DNA molecule and computer engineering is required to develop efficient DNA computing algorithms. After Adleman solved the Hamiltonian Path Problem using a combinatorial molecular method, many other hard computational problems were investigated with the proposed DNA computer. The existing models from which a few DNA computing algorithms have been developed are not sufficiently powerful and robust, however, to attract potential users. This thesis describes research performed to build a new DNA computing model based on various new algorithms developed to solve the 3-Coloring problem. These new algorithms are presented as vehicles for demonstrating the advantages of the new model, and they can be extended to solve other NP-complete problems. They can significantly speed up computation and therefore achieve consistently better time performance. With the given resources, these algorithms can also solve problems of much greater size, especially compared to existing DNA computing algorithms. The error rate can also be greatly reduced by applying these new algorithms. Furthermore, they have the advantage of dynamic updating, so an answer can be changed based on modifications made to the initial condition. The new model makes use of the huge available memory by generating a "lookup table" during the implementation of the algorithms. If the initial condition changes, the answer changes accordingly. In addition, the new model has the advantage of decoding all the strands in the final pool both quickly and efficiently. The advantages provided by the new model make DNA computing an efficient and attractive means of solving computationally intense problems.
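For reference, the 3-Coloring problem that the DNA algorithms target can be stated as a conventional backtracking search. This is the electronic-computer formulation of the problem, shown only to define it, not the DNA method itself:

```python
def three_color(n, edges):
    """Return a proper 3-coloring of an n-vertex graph as a list of colors
    (0, 1, 2), or None if no proper 3-coloring exists. NP-complete in
    general, which is what makes it a natural DNA-computing benchmark."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n

    def go(v):
        if v == n:
            return True
        for c in range(3):
            # A color is usable if no already-colored neighbor has it.
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if go(v + 1):
                    return True
                color[v] = -1
        return False

    return color[:] if go(0) else None
```

Backtracking like this takes exponential time in the worst case; the DNA model's appeal is exploring the exponential candidate space in massive molecular parallelism instead.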
9

Shortest Path Queries in Very Large Spatial Databases

Zhang, Ning January 2001 (has links)
Finding shortest paths in a graph has been studied for a long time, and there are many main-memory-based algorithms for this problem. Among these, Dijkstra's algorithm is one of the most commonly used efficient algorithms for graphs with non-negative edge weights. Even more efficient algorithms have been developed recently for graphs with particular properties, such as edge weights falling into a bounded integer range. All of these algorithms require the graph to reside entirely in main memory. However, for very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), this requirement cannot be satisfied in most cases, so the algorithms mentioned above are not appropriate. My objective in this thesis is to design and evaluate the performance of external-memory (disk-based) shortest path algorithms and data structures to solve the shortest path problem in very large digital maps. In particular, the following questions are studied: What have other researchers done on shortest path queries in very large digital maps? What could be improved on the previous work? How efficient are our new shortest path algorithms on digital maps, and what factors affect the efficiency? What can be done based on the algorithm? In this thesis, we give a disk-based Dijkstra-like algorithm to answer shortest path queries based on pre-processing information. Experiments based on our Java implementation are given to show what factors affect the running time of our algorithms.
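The in-memory Dijkstra baseline that the thesis extends to disk-based graphs can be sketched as follows (textbook priority-queue version, shown for orientation; the thesis's contribution is making this work when the graph does not fit in memory):

```python
import heapq

def dijkstra(adj, src):
    """Textbook in-memory Dijkstra with a binary heap.
    adj maps each vertex u to a list of (v, w) edges with w >= 0;
    returns a dict of shortest distances from src to reachable vertices."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Every step here (the priority queue, the adjacency lookups, the distance table) assumes random access to the whole graph, which is exactly the assumption a disk-based variant must relax.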
10

Exploiting parallelism in decomposition methods for constraint satisfaction

Akatov, Dmitri January 2010 (has links)
Constraint Satisfaction Problems (CSPs) are NP-complete in general; however, there are many tractable subclasses that rely on restricting the structure of their underlying hypergraphs. It is a well-known fact, for instance, that CSPs whose underlying hypergraph is acyclic are tractable. Trying to define “nearly acyclic” hypergraphs led to the definition of various hypergraph decomposition methods. An important member of this class is the hypertree decomposition method, introduced by Gottlob et al. It possesses the property that CSPs falling into this class can be solved efficiently, and that hypergraphs in this class can be recognized efficiently as well. Apart from polynomial tractability, complexity analysis has shown that both aforementioned problems lie in the low complexity class LOGCFL and are thus efficiently parallelizable. A parallel algorithm has been proposed for the “evaluation problem”; however, all algorithms for the “recognition problem” presented to date are sequential. The main contribution of this dissertation is the creation of an object-oriented programming library, including a task scheduler, which allows the parallelization of a whole range of computational problems fulfilling certain complexity-theoretic restrictions. This library merely requires the programmer to provide the implementation of several classes and methods representing a general alternating algorithm, while the mechanics of the task scheduler remain hidden. In particular, we use this library to create an efficient parallel algorithm which computes hypertree decompositions of a fixed width. Another result of a more theoretical nature is the definition of a new type of decomposition method, called balanced decompositions. Solving CSPs of bounded balanced width and recognizing such hypergraphs is only quasi-polynomial, however still parallelizable to a certain extent.
A complexity-theoretic analysis leads to the definition of a new complexity class hierarchy, called the DC-hierarchy, with the first class in this hierarchy, DC1, precisely capturing the complexity of solving CSPs of bounded balanced width.
