Reconstruction and Visualization of Polyhedra Using Projections

Hasan, Masud January 2005
Two types of problems are studied in this thesis: reconstruction and visualization of polygons and polyhedra.

Three problems are considered in the reconstruction of polygons and polyhedra from a given set of projection characteristics. The first problem is to reconstruct a closed convex polygon (polyhedron) given the number of visible edges (faces) from each of a set of directions S. The main results for this problem include necessary and sufficient conditions for the existence of a polygon that realizes the projections; this characterization yields an algorithm to construct a feasible polygon when one exists. The other main result is an algorithm to find the maximum and minimum size of a feasible polygon for the given set S. Some special cases for non-convex polygons and for perspective projections are also studied.

For the reconstruction of polyhedra, it is shown that when the projection directions are coplanar, a feasible polyhedron (i.e., a polyhedron satisfying the projection properties) can be constructed from a feasible polygon, and vice versa. When the directions are covered by two planes and the number of visible faces from each direction is at least four, an algorithm is presented to decide the existence of a feasible polyhedron and to construct one when it exists. When the directions see an arbitrary number of faces, the same algorithm works, except for one particular sub-case.

A polyhedron is called equiprojective if, from every direction, the size of the projection or of the projection boundary is fixed, where "size" means the number of vertices, edges, or faces. A special reconstruction problem is to find all equiprojective polyhedra. For the case when the size is the number of vertices in the projection boundary, the main results include a characterization of all equiprojective polyhedra, an algorithm to recognize them, and the minimum equiprojective polyhedra. Other measures of equiprojectivity are also studied.

Finally, the problem of efficient visualization of polyhedra under given constraints is considered. A user might wish to find a projection that highlights certain properties of a polyhedron. In particular, the problem considered is: given a set of vertices, edges, and/or faces of a convex polyhedron, determine all projections of the polyhedron such that the elements of the given set are on the projection boundary. The results include efficient algorithms for both perspective and orthogonal projections, and an improved adaptive algorithm for the case where only edges are given and they form disjoint paths. A related problem of finding all projections in which the given edges, faces, and/or vertices are not on the projection boundary is also studied.
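
For intuition about the projection data the reconstruction problems start from, the sketch below counts the faces of a convex polyhedron visible under an orthogonal projection: a face is visible exactly when its outward normal makes a positive dot product with the viewing direction. The cube and the sample directions are hypothetical, and the thesis's reconstruction algorithms themselves are not reproduced here.

```python
# Illustrative only: counting visible faces of a convex polyhedron under an
# orthogonal projection -- the quantity the reconstruction problem takes as input.
import numpy as np

def visible_face_count(normals, d, eps=1e-9):
    """A face of a convex polyhedron is visible from direction d (orthogonal
    projection) iff its outward normal has a positive dot product with d."""
    d = np.asarray(d, dtype=float)
    return sum(1 for n in normals if np.dot(n, d) > eps)

# Outward unit normals of an axis-aligned cube (hypothetical example).
cube_normals = [( 1, 0, 0), (-1, 0, 0),
                ( 0, 1, 0), ( 0, -1, 0),
                ( 0, 0, 1), ( 0, 0, -1)]

for d in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(d, visible_face_count(cube_normals, d))  # 1, 2 and 3 visible faces
```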

A Collapsing Method for Efficient Recovery of Optimal Edges

Hu, Mike January 2002
In this thesis we present a novel algorithm, HyperCleaning*, for effectively inferring phylogenetic trees. The method is based on the quartet method paradigm and is guaranteed to recover the best-supported edges of the underlying phylogeny based on the witness quartet set. This is performed efficiently using a collapsing mechanism that employs a memory/time tradeoff to ensure no loss of information. This enables HyperCleaning* to solve the relaxed version of the Maximum-Quartet-Consistency problem feasibly, thus providing a valuable tool for inferring phylogenies using quartet-based analysis.

Interior-Point Algorithms Based on Primal-Dual Entropy

Luo, Shen January 2006
We propose a family of search directions based on primal-dual entropy in the context of interior point methods for linear programming. This new family contains previously proposed search directions in the context of primal-dual entropy. We analyze the new family of search directions by studying their primal-dual affine-scaling and constant-gap centering components. We then design primal-dual interior-point algorithms by utilizing our search directions in a homogeneous and self-dual framework. We present iteration complexity analysis of our algorithms and provide the results of computational experiments on NETLIB problems.
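
For orientation, one common way to define the primal-dual entropy of a strictly feasible pair (x, s) > 0 is sketched below; the exact normalization and the specific family of directions studied in the thesis may differ, so this is background notation rather than the thesis's definition.

```latex
% Entropy of the complementarity products relative to the uniform distribution;
% a hedged background formula, not necessarily the thesis's exact definition.
H(x, s) \;=\; \sum_{i=1}^{n} \frac{x_i s_i}{x^{\top} s}\,
        \ln\!\left( \frac{n \, x_i s_i}{x^{\top} s} \right) \;\ge\; 0,
\qquad
H(x, s) = 0 \iff x_i s_i = \frac{x^{\top} s}{n} \ \text{for all } i .
```

Because H vanishes exactly on the central path, a search direction can trade off decreasing the duality gap (the affine-scaling component) against decreasing H (the centering component), which matches the decomposition analyzed above.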

Congestion Control for Adaptive Satellite Communication Systems with Intelligent Systems

Vallamsundar, Banupriya January 2007
With the advent of life-critical and real-time services such as remote operations over satellite and e-health, providing a guaranteed minimum level of service at every ground terminal of a satellite communication system has gained utmost priority. Ground terminals and the hub are not equipped with the intelligence required to predict and react to inclement and dynamic weather conditions on their own. The focus of this thesis is to develop intelligent algorithms that aid in the adaptive management of quality of service at the ground-terminal and gateway levels. The aim is to adapt both the ground terminal and the gateway to changing weather conditions and to maintain a steady throughput level together with the Quality of Service (QoS) requirements on queue delay, jitter, and packet-loss probability. Existing satellite systems employ First-In-First-Out (FIFO) scheduling to control congestion in their networks. This mechanism is not well equipped to contend with changing link capacities, a common consequence of bad weather and faults, nor to provide different levels of prioritized service to customers in a way that satisfies QoS requirements. This research proposes to use the reported strength of fuzzy logic in controlling highly non-linear and complex systems such as the satellite communication network. The proposed fuzzy-based model, when integrated into the satellite gateway, provides the robustness the ground terminals need to cope with varying levels of traffic and the dynamic impacts of weather.
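
To make the idea concrete, here is a deliberately small, hypothetical fuzzy-inference sketch in the spirit described above; the membership functions, rule base, and output scaling are invented for illustration and are not the controller developed in the thesis.

```python
# Hypothetical sketch of fuzzy congestion control: queue occupancy and rain-fade
# level (both normalized to [0, 1]) drive a bandwidth-allocation adjustment.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rate_adjustment(queue_occupancy, fade_level):
    """Zero-order Sugeno inference: each rule maps a fuzzy condition to a crisp
    adjustment; the output is the firing-strength-weighted average."""
    low_q, high_q = tri(queue_occupancy, -0.5, 0.0, 0.6), tri(queue_occupancy, 0.4, 1.0, 1.5)
    low_f, high_f = tri(fade_level, -0.5, 0.0, 0.6), tri(fade_level, 0.4, 1.0, 1.5)
    rules = [
        (min(low_q,  low_f),  +0.10),  # light load, clear sky: grant a little more
        (min(high_q, low_f),  -0.10),  # congestion but good link: throttle slightly
        (min(high_q, high_f), -0.40),  # congestion under heavy fade: throttle hard
        (min(low_q,  high_f), -0.20),  # fade but light load: modest reduction
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(fuzzy_rate_adjustment(0.8, 0.9))  # strong reduction under congestion plus fade
```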

Error Detection in Number-Theoretic and Algebraic Algorithms

Vasiga, Troy Michael John January 2008
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and A has range O', where O ⊆ O'; that is, the algorithm may produce results that are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:

1. Correct and feasible -- the algorithm computes the correct result;
2. Incorrect and feasible -- the algorithm computes an incorrect result, and this output is in O;
3. Incorrect and infeasible -- the algorithm computes an incorrect result, and the output is in O' \ O.

Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order, and quadratic residues. Moreover, we show that typically there is an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
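
As a toy illustration of the three-way classification (not one of the algorithms analyzed in the thesis), consider squaring modulo a prime with a simulated single bit flip: the feasible range O is the set of quadratic residues, so an erroneous output that is a non-residue is detectable as infeasible.

```python
# Toy example of the correct/feasible classification; the prime, the fault model
# and the function f(a) = a^2 mod p are illustrative assumptions.
import random

def classify(a, p, flip_bit=None):
    true_result = (a * a) % p          # f(a): always a quadratic residue mod p
    # Simulated fault: flip one bit of the result (kept in [0, p) for simplicity).
    out = true_result if flip_bit is None else (true_result ^ (1 << flip_bit)) % p
    if out == true_result:
        return "correct and feasible"
    # Euler's criterion: out (!= 0) is a residue mod p iff out^((p-1)/2) == 1 (mod p).
    feasible = out == 0 or pow(out, (p - 1) // 2, p) == 1
    return "incorrect and feasible" if feasible else "incorrect and infeasible"

p = 1009
random.seed(1)
print(classify(123, p))                                   # no fault: correct and feasible
for _ in range(5):
    a = random.randrange(1, p)
    print(classify(a, p, flip_bit=random.randrange(10)))  # faulty runs, some detectable
```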

Laser-initiated Coulomb explosion imaging of small molecules

Brichta, Jean-Paul Otto January 2008
Momentum vectors of fragment ions produced by the Coulomb explosion of CO2^z+ (z = 3-6) and CS2^z+ (z = 3-13) in an intense laser field (~50 fs, 1 × 10^15 W/cm^2) are determined by the triple-coincidence imaging technique. The molecular structure from symmetric and asymmetric explosion channels is reconstructed from the measured momentum vectors using a novel simplex algorithm that can be extended to study larger molecules. Physical parameters such as the bend angle and bond lengths are extracted from the data and are qualitatively described using an enhanced-ionization model that predicts the laser intensity required for ionization as a function of bond length using classical, over-the-barrier arguments. As a way of going beyond the classical model, molecular ionization is examined using a quantum-mechanical, wavefunction-modified ADK method. The ADK model is used to calculate the ionization rates of H2, N2, and CO2 as a function of the initial vibrational level of the molecules. A strong increase in the ionization rate with vibrational level is found for H2, while N2 and CO2 show a lesser increase. The prospects for using ionization rates as a diagnostic for vibrational-level populations are assessed.
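
For context, the classical over-the-barrier (barrier-suppression) estimate for a level with ionization potential I_p that leaves behind charge Z is shown below; the enhanced-ionization model used in the thesis adds the bond-length dependence for molecules, which is not reproduced here.

```latex
% Standard barrier-suppression estimate for atomic ionization; background only.
F_{\mathrm{BS}} = \frac{I_p^{2}}{4 Z} \ \text{(atomic units)},
\qquad
I_{\mathrm{BS}} \approx 4 \times 10^{9}\,
  \frac{\bigl(I_p\,[\mathrm{eV}]\bigr)^{4}}{Z^{2}}\ \mathrm{W/cm^{2}} .
```

For atomic hydrogen (I_p = 13.6 eV, Z = 1) this gives roughly 1.4 × 10^14 W/cm^2, so the ~10^15 W/cm^2 pulses used here sit well into the multiple-ionization regime.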

Intelligent Scheduling of Medical Procedures

Sui, Yang January 2009
In the Canadian universal healthcare system, public access to care is not limited by monetary or socio-economic factors. Rather, waiting time is the dominant factor limiting public access to healthcare. Excessive waiting lowers quality of life and allows a patient's condition to worsen during the delay, which can reduce the effectiveness of the planned operation. Excessive waiting has also been shown to carry an economic cost. At the core of the wait-time problem is a resource scheduling and management issue. The scheduling of medical procedures is a complex and difficult task. The goal of the research in this thesis is to develop the foundational models and algorithms for a resource optimization system. Such a system will help healthcare administrators intelligently schedule procedures to optimize resource utilization, identify bottlenecks, and reduce patient wait times. This thesis develops a novel framework, the MPSP model, to model medical procedures. The MPSP model is designed to be general and versatile enough to model a variety of different procedures; the specific procedure modelled in detail in this thesis is haemodialysis. Solving the MPSP model exactly to obtain guaranteed optimal solutions is computationally expensive and not practical for real-time scheduling. A fast, high-quality evolutionary heuristic, gMASH, is developed to quickly solve large problems. The MPSP model and the gMASH heuristic form a foundation for an intelligent medical-procedure scheduling and optimization system.

Computing sparse multiples of polynomials

Tilak, Hrushikesh 20 August 2010
We consider the problem of finding a sparse multiple of a polynomial. Given a polynomial f ∈ F[x] of degree d over a field F, and a desired sparsity t = O(1), our goal is to determine whether there exists a multiple h ∈ F[x] of f such that h has at most t non-zero terms, and if so, to find such an h. When F = Q, we give an algorithm that runs in time polynomial in d and the size of the coefficients of h. For binomial multiples we prove a polynomial bound on the degree of the least-degree binomial multiple, independent of coefficient size. When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of integer factoring), and this lower bound is tight when t = 2.
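
For the binomial case (t = 2), the connection to multiplicative order can be seen with a naive sketch: x^k − c is a multiple of f exactly when x^k ≡ c (mod f), so searching for the least binomial multiple amounts to finding the first power of x that reduces to a constant modulo f. The brute-force search below, over a toy prime field, is purely illustrative and is not the algorithm developed in the thesis.

```python
# Naive illustration over F_p; exponential in the worst case, unlike the thesis's results.
def least_binomial_multiple(f, p, max_deg=10_000):
    """f: monic polynomial over F_p as a low-to-high coefficient list with f(0) != 0.
    Returns (k, c) such that x^k ≡ c (mod f), i.e. x^k - c is a binomial multiple of f."""
    d = len(f) - 1
    r = [1] + [0] * (d - 1)                    # r = x^0 mod f, as a length-d vector
    for k in range(1, max_deg + 1):
        r = [0] + r                            # multiply by x
        lead = r.pop()                         # coefficient of x^d after the shift
        r = [(a - lead * b) % p for a, b in zip(r, f[:d])]   # reduce modulo f
        if all(a == 0 for a in r[1:]):         # constant remainder: binomial found
            return k, r[0]
    return None

# f = x^2 + x + 1 over F_2 divides x^3 - 1, so the least binomial multiple is x^3 + 1.
print(least_binomial_multiple([1, 1, 1], p=2))   # -> (3, 1)
```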

Storage management for large scale systems

Wang, Wenguang 15 December 2004
Because of the slow access time of disk storage, storage management is crucial to the performance of many large-scale computer systems. This thesis studies performance issues in buffer cache management and disk layout management, two important components of storage management.

The buffer cache stores popular disk pages in memory to speed up access to them. Buffer cache management algorithms used in real systems often have many parameters that require careful hand-tuning to achieve good performance. A self-tuning algorithm is proposed that automatically tunes the page-cleaning activity of the buffer cache management algorithm by monitoring the I/O activity of the buffer cache. This algorithm achieves performance comparable to the best manually tuned system.

The global data structure used by the buffer cache management algorithm is protected by a lock. Access to this lock can cause contention, which can significantly reduce system throughput in multi-processor systems. Current solutions that eliminate lock contention decrease the hit ratio of the buffer cache, which causes poor performance when the system is I/O-bound. A new approach, called the multi-region cache, is proposed. This approach eliminates lock contention, maintains the hit ratio of the buffer cache, and incurs little overhead. Moreover, it can be applied to most buffer cache management algorithms.

Disk layout management arranges the layout of pages on disks to improve disk I/O efficiency. The typical disk layout approach, called Overwrite, is optimized for sequential I/Os from a single file. Interleaved writes from multiple users can significantly decrease system throughput in large-scale systems using Overwrite. Although the Log-structured File System (LFS) is optimized for such workloads, its garbage-collection overhead can be expensive. On modern and future disks, because disk transfer bandwidth improves much faster than disk positioning time, LFS performs much better than Overwrite for most workloads, unless the disk is close to full. A new disk layout approach, called HyLog, is proposed. HyLog achieves performance comparable to the best of the existing disk layout approaches in most cases.
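
The following sketch conveys the multi-region idea in miniature; the hashing scheme, region count, and LRU replacement here are illustrative assumptions, not the thesis's actual design: pages are hashed to independent regions, each with its own lock and replacement state, so threads touching different regions never contend on a single global buffer-cache lock.

```python
# Minimal, hypothetical rendering of a multi-region buffer cache.
import threading
from collections import OrderedDict

class MultiRegionCache:
    def __init__(self, capacity, regions=8):
        self.regions = [
            {"lock": threading.Lock(), "lru": OrderedDict(), "cap": capacity // regions}
            for _ in range(regions)
        ]

    def get(self, page_id, load_page):
        region = self.regions[hash(page_id) % len(self.regions)]
        with region["lock"]:                          # contention limited to one region
            lru = region["lru"]
            if page_id in lru:
                lru.move_to_end(page_id)              # refresh LRU position on a hit
                return lru[page_id]
            if len(lru) >= region["cap"]:
                lru.popitem(last=False)               # evict the least recently used page
            lru[page_id] = page = load_page(page_id)  # miss: read the page from disk
            return page

cache = MultiRegionCache(capacity=1024)
print(cache.get(42, load_page=lambda pid: f"<contents of page {pid}>"))
```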

Hardware implementation of Daubechies wavelet transforms using folded AIQ mapping

Islam, Md Ashraful 22 September 2010
The Discrete Wavelet Transform (DWT) is a popular tool in the field of image and video compression. Because of its multi-resolution representation capability, the DWT has been used effectively in applications such as transient signal analysis, computer vision, texture analysis, cell detection, and image compression. Daubechies wavelets are among the most popular transforms in the wavelet family, and Daubechies filters provide excellent spatial and spectral locality, properties which make them useful in image compression.

In this thesis, we present an efficient implementation of a shared hardware core to compute two 8-point Daubechies wavelet transforms. The architecture is based on a new two-level folded mapping technique, an improved version of Algebraic Integer Quantization (AIQ). The scheme is developed from the factorization and decomposition of the transform coefficients, exploiting the symmetric and wrapping structure of the matrices. The proposed architecture is parallel, pipelined, and multiplexed. Compared to existing designs, the proposed scheme significantly reduces hardware cost, critical-path delay, and power consumption while providing a higher throughput rate.

We then briefly present a new mapping scheme to compute, without error, the 8-tap Daubechies wavelet transform, the next member of the Daubechies family after the 6-tap transform. The multidimensional technique maps the irrational transform-basis coefficients to integers, resulting in a considerable reduction in hardware and power consumption and a significant improvement in image reconstruction quality.
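
As a point of reference for what such a core computes, the sketch below evaluates the 6-tap and 8-tap Daubechies transforms of one 8-point block in floating point using the PyWavelets library ('db3' and 'db4' in its naming). The library and the sample input are outside the thesis and serve only as an illustrative software reference, not the AIQ hardware mapping.

```python
# Hypothetical floating-point reference; requires PyWavelets (pip install PyWavelets).
import numpy as np
import pywt

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])  # one 8-point input block

cA6, cD6 = pywt.dwt(x, 'db3')   # 6-tap Daubechies: approximation / detail coefficients
cA8, cD8 = pywt.dwt(x, 'db4')   # 8-tap Daubechies

print(cA6, cD6)
print(cA8, cD8)
```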
