691

Design and Analysis of Intelligent Fuzzy Tension Controllers for Rolling Mills

Liu, Jingrong January 2002 (has links)
This thesis presents a fuzzy logic controller aimed at maintaining constant tension between two adjacent stands in tandem rolling mills. The fuzzy tension controller monitors tension variation by comparing the electric currents of different operation modes and sets the reference for the speed controller of the upstream stand. Based on modeling the rolling stand as a single-input single-output linear discrete system, which operates in the normal mode and is subject to internal and external noise, the element settings and parameter selections in the design of the fuzzy controller are discussed. To improve the performance of the fuzzy controller, a dynamic fuzzy controller is proposed: by switching the fuzzy controller elements in relation to the step response, both transient and stationary performance are enhanced. To endow the fuzzy controller with generalization, flexibility, and adaptivity, self-learning techniques are introduced to obtain the fuzzy controller parameters. With the inclusion of supervision and conventional control criteria, the parameters of the fuzzy inference system are tuned by a back-propagation algorithm, or their optimal values are located by means of a genetic algorithm. In simulations, the neuro-fuzzy tension controller exhibits real-time applicability, while the genetic fuzzy tension controller shows an outstanding global optimization ability.
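The error-to-correction stage of such a controller can be sketched as a small Mamdani-style inference step. This is an illustrative sketch only: the membership functions, rule base, and sign convention below are assumptions, not the thesis design, and the actual controller also takes electric-current comparisons and error change as inputs.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_correction(tension_error):
    """Map a normalized tension error in [-1, 1] to a speed-reference
    correction using three rules and weighted-average defuzzification."""
    # Membership degrees for Negative / Zero / Positive error.
    mu = {
        "N": tri(tension_error, -2.0, -1.0, 0.0),
        "Z": tri(tension_error, -1.0, 0.0, 1.0),
        "P": tri(tension_error, 0.0, 1.0, 2.0),
    }
    # Rule consequents (sign convention is an assumption for illustration).
    out = {"N": -1.0, "Z": 0.0, "P": 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den
```

A self-learning variant, as the abstract describes, would tune the membership breakpoints and consequents by back-propagation or a genetic algorithm rather than fixing them by hand.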
692

Evaluation of Shortest Path Query Algorithm in Spatial Databases

Lim, Heechul January 2003 (has links)
Many variations of algorithms for finding the shortest path in a large graph have been introduced recently, driven by the needs of applications such as Geographic Information Systems (GIS) and Intelligent Transportation Systems (ITS). The primary subjects of these algorithms are materialization and hierarchical path views. Some studies focus on materialization, accepting higher pre-computation and storage costs in exchange for faster query processing. Others focus on the shortest-path algorithm itself, requiring less pre-computation and storage but more time to compute each shortest path. The main objective of this thesis is to accelerate shortest-path queries while keeping the degree of materialization as low as possible. The thesis explores two directions: 1) reducing the I/O costs of multiple queries, and 2) reducing search spaces in a graph. It proposes two simple algorithms to reduce I/O costs, especially for multiple queries. To reduce search spaces, we give two different levels of materialization, namely the <i>boundary set distance matrix</i> and the <i>x-Hop sketch graph</i>, both of which materialize the shortest-path view of the boundary nodes in a partitioned graph. Our experiments show that a combination of the solutions suggested for 1) and 2) performs better than the original disk-based SP algorithm [7], on which our work is based, and requires much less storage than <i>HEPV</i> [3].
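The boundary-node materialization idea can be sketched as follows. This is a simplified illustration with hypothetical function names: for brevity each search runs over the whole graph, whereas the schemes the abstract describes restrict each search to a single partition.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances on a weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def boundary_matrix(adj, boundary):
    """Materialize shortest-path distances between boundary nodes."""
    return {b: dijkstra(adj, b) for b in boundary}

def query(adj, D, boundary, s, t):
    """Answer a cross-partition query by routing through the
    materialized boundary-to-boundary distance matrix D."""
    ds = dijkstra(adj, s)   # s to boundary nodes
    dt = dijkstra(adj, t)   # t to boundary nodes
    return min(ds[b1] + D[b1][b2] + dt[b2]
               for b1 in boundary for b2 in boundary)
```

The trade-off the abstract describes lives in `boundary_matrix`: more materialized boundary pairs mean more storage but smaller per-query search spaces.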
693

Reconstruction and Visualization of Polyhedra Using Projections

Hasan, Masud January 2005 (has links)
Two types of problems are studied in this thesis: reconstruction and visualization of polygons and polyhedra.

Three problems are considered in the reconstruction of polygons and polyhedra from a set of projection characteristics. The first is to reconstruct a closed convex polygon (polyhedron) given the number of visible edges (faces) from each direction in a set of directions <em>S</em>. The main results for this problem include necessary and sufficient conditions for the existence of a polygon that realizes the projections; this characterization yields an algorithm to construct a feasible polygon when one exists. The other main result is an algorithm to find the maximum and minimum size of a feasible polygon for the given set <em>S</em>. Some special cases for non-convex polygons and for perspective projections are also studied.

For reconstruction of polyhedra, it is shown that when the projection directions are co-planar, a feasible polyhedron (i.e., a polyhedron satisfying the projection properties) can be constructed from a feasible polygon, and vice versa. When the directions are covered by two planes and the number of visible faces from each direction is at least four, an algorithm is presented to decide the existence of a feasible polyhedron and to construct one when it exists. When the directions see an arbitrary number of faces, the same algorithm works, except for one particular sub-case.

A polyhedron is, in general, called equiprojective if from every direction the size of the projection or the projection boundary is fixed, where "size" means the number of vertices, edges, or faces. A special reconstruction problem is to find all equiprojective polyhedra. For the case where the size is the number of vertices of the projection boundary, the main results include a characterization of all equiprojective polyhedra, an algorithm to recognize them, and the minimum equiprojective polyhedron. Other measures of equiprojectivity are also studied.

Finally, the problem of efficient visualization of polyhedra under given constraints is considered. A user might wish to find a projection that highlights certain properties of a polyhedron. In particular, given a set of vertices, edges, and/or faces of a convex polyhedron, the problem is to determine all projections of the polyhedron such that the elements of the given set are on the projection boundary. The results include efficient algorithms for both perspective and orthogonal projections, and an improved adaptive algorithm for the case where only edges are given and they form disjoint paths. A related problem, finding all projections in which the given edges, faces, and/or vertices are not on the projection boundary, is also studied.
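The visibility counts that drive these reconstruction problems are easy to compute in the forward direction, for a known convex polyhedron: under orthogonal projection, a face is visible exactly when its outward normal has a positive component along the viewing direction. A minimal sketch (the cube example and function name are illustrative):

```python
def visible_faces(normals, direction):
    """Count faces of a convex polyhedron visible under orthogonal
    projection along `direction`, given outward unit face normals.
    A face is visible iff normal . direction > 0."""
    return sum(1 for n in normals
               if sum(ni * di for ni, di in zip(n, direction)) > 0)

# Axis-aligned unit cube: six outward face normals.
cube = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
        (0, -1, 0), (0, 0, 1), (0, 0, -1)]
```

The thesis's reconstruction problems run this map in reverse: given such counts for every direction in <em>S</em>, decide whether some polygon or polyhedron realizes them.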
694

A Collapsing Method for Efficient Recovery of Optimal Edges

Hu, Mike January 2002 (has links)
In this thesis we present a novel algorithm, <I>HyperCleaning*</I>, for effectively inferring phylogenetic trees. The method is based on the quartet-method paradigm and is guaranteed to recover the best-supported edges of the underlying phylogeny with respect to the witness quartet set. This is done efficiently using a collapsing mechanism that employs a memory/time trade-off to ensure no loss of information. This enables <I>HyperCleaning*</I> to solve a relaxed version of the Maximum-Quartet-Consistency problem feasibly, providing a valuable tool for quartet-based phylogenetic inference.
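The basic object in the quartet paradigm, the tree topology induced on four taxa, can be read off additive distances via the four-point condition: the pairing whose within-pair distance sum is smallest gives the split. This is only a sketch of that building block; HyperCleaning* itself works from weighted witness quartet sets rather than raw distances.

```python
def quartet_topology(d, a, b, c, e):
    """Infer the quartet split on taxa {a, b, c, e} from a distance
    matrix d (dict of dicts) via the four-point condition: for an
    additive tree metric, the true split's pair-sum is strictly
    smallest among the three pairings."""
    sums = {
        ((a, b), (c, e)): d[a][b] + d[c][e],
        ((a, c), (b, e)): d[a][c] + d[b][e],
        ((a, e), (b, c)): d[a][e] + d[b][c],
    }
    return min(sums, key=sums.get)
```

A quartet method applies this (or a statistical analogue) to every four-taxon subset, then assembles a tree consistent with as many inferred quartets as possible, which is the Maximum-Quartet-Consistency problem the abstract mentions.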
695

Interior-Point Algorithms Based on Primal-Dual Entropy

Luo, Shen January 2006 (has links)
We propose a family of search directions based on primal-dual entropy in the context of interior point methods for linear programming. This new family contains previously proposed search directions in the context of primal-dual entropy. We analyze the new family of search directions by studying their primal-dual affine-scaling and constant-gap centering components. We then design primal-dual interior-point algorithms by utilizing our search directions in a homogeneous and self-dual framework. We present iteration complexity analysis of our algorithms and provide the results of computational experiments on NETLIB problems.
696

Congestion Control for Adaptive Satellite Communication Systems with Intelligent Systems

Vallamsundar, Banupriya January 2007 (has links)
With the advent of life-critical and real-time services such as remote operations over satellite and e-health, providing a guaranteed minimum level of service at every ground terminal of a satellite communication system has gained utmost priority. Ground terminals and the hub are not equipped with the intelligence required to predict and react to inclement and dynamic weather conditions on their own. The focus of this thesis is to develop intelligent algorithms that aid in adaptive management of the quality of service at the ground-terminal and gateway levels, so that both adapt to changing weather conditions while maintaining a steady throughput and meeting Quality of Service (QoS) requirements on queueing delay, jitter, and packet-loss probability. Existing satellite systems employ a First-In-First-Out queueing discipline to control congestion in their networks. This mechanism cannot adequately contend with changing link capacities, a common consequence of bad weather and faults, nor can it provide customers with the different levels of prioritized service that QoS requirements demand. This research proposes to exploit the reported strength of fuzzy logic in controlling highly non-linear and complex systems such as a satellite communication network. The proposed fuzzy model, when integrated into the satellite gateway, provides the ground terminals with the robustness needed to cope with varying traffic levels and the dynamic impact of weather.
697

Error Detection in Number-Theoretic and Algebraic Algorithms

Vasiga, Troy Michael John January 2008 (has links)
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes significant. Motivated by this fact, this thesis defines a statistical measure called robustness and evaluates it for several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and A has range O', where O ⊆ O'; that is, the algorithm may produce results outside the possible range of the function. Given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:

1. Correct and feasible -- the algorithm computes the correct result;
2. Incorrect and feasible -- the algorithm computes an incorrect result that lies in O;
3. Incorrect and infeasible -- the algorithm computes an incorrect result that lies in O'\O.

Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (the Lucas-Lehmer and Pepin tests), group order, and quadratic residues. Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it rarely gives the correct result).
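The three-way classification can be illustrated with a toy fault-injection experiment. This is a sketch under stated assumptions: the squaring function and single-bit-flip fault model are illustrative stand-ins, not the thesis's Lucas-Lehmer or Pepin analyses; here O is the set of perfect squares and O' is all non-negative integers.

```python
import math

def classify(result, correct, is_feasible):
    """Place an algorithm's output into one of the three categories."""
    if result == correct:
        return "correct-feasible"
    return "incorrect-feasible" if is_feasible(result) else "incorrect-infeasible"

def square_with_fault(x, flip_bit=None):
    """Compute x*x, optionally flipping one bit of the result to model
    a transient hardware error."""
    r = x * x
    if flip_bit is not None:
        r ^= 1 << flip_bit
    return r

def is_square(n):
    """Feasibility test: is n in the range O of the squaring function?"""
    return n >= 0 and math.isqrt(n) ** 2 == n
```

An infeasible output (category 3) is detectable by a range check alone; robustness measures how often faults land in the undetectable category 2, where the faulty result still looks legitimate.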
698

Laser-initiated Coulomb explosion imaging of small molecules

Brichta, Jean-Paul Otto January 2008 (has links)
Momentum vectors of fragment ions produced by the Coulomb explosion of CO2^(z+) (z = 3-6) and CS2^(z+) (z = 3-13) in an intense laser field (~50 fs, 1 x 10^15 W/cm^2) are determined by the triple-coincidence imaging technique. The molecular structure from symmetric and asymmetric explosion channels is reconstructed from the measured momentum vectors using a novel simplex algorithm that can be extended to study larger molecules. Physical parameters such as bend angle and bond lengths are extracted from the data and are qualitatively described using an enhanced-ionization model that predicts the laser intensity required for ionization as a function of bond length using classical, over-the-barrier arguments. As a way of going beyond the classical model, molecular ionization is examined using a quantum-mechanical, wave-function-modified ADK method. The ADK model is used to calculate the ionization rates of H2, N2, and CO2 as a function of the initial vibrational level of the molecules. A strong increase in the ionization rate with vibrational level is found for H2, while N2 and CO2 show a lesser increase. The prospects for using ionization rates as a diagnostic for vibrational-level populations are assessed.
699

Intelligent Scheduling of Medical Procedures

Sui, Yang January 2009 (has links)
In the Canadian universal healthcare system, public access to care is not limited by monetary or socio-economic factors; rather, waiting time is the dominant factor limiting access to healthcare. Excessive waiting lowers quality of life, and a patient's condition may worsen during the delay, reducing the effectiveness of the planned operation. Excessive waiting has also been shown to carry economic costs. At the core of the wait-time problem is a resource scheduling and management issue: the scheduling of medical procedures is a complex and difficult task. The goal of the research in this thesis is to develop the foundational models and algorithms for a resource optimization system. Such a system will help healthcare administrators intelligently schedule procedures to optimize resource utilization, identify bottlenecks, and reduce patient wait times. This thesis develops a novel framework, the MPSP model, to model medical procedures. The MPSP model is designed to be general and versatile enough to model a variety of procedures; the procedure modeled in detail in this thesis is haemodialysis. Solving the MPSP model exactly to obtain guaranteed optimal solutions is computationally expensive and impractical for real-time scheduling, so a fast, high-quality evolutionary heuristic, gMASH, is developed to solve large problems quickly. The MPSP model and the gMASH heuristic form the foundation of an intelligent medical-procedure scheduling and optimization system.
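The resource-assignment core of such scheduling can be illustrated with a classic greedy heuristic. This is a sketch only: the MPSP model and gMASH are thesis-specific, and the Longest-Processing-Time rule below is a standard textbook stand-in, not the thesis's method.

```python
import heapq

def lpt_schedule(durations, n_machines):
    """Longest-Processing-Time-first heuristic for minimizing makespan:
    sort jobs by decreasing duration, then always assign the next job
    to the currently least-loaded machine (tracked in a min-heap)."""
    loads = [(0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for job, dur in sorted(enumerate(durations), key=lambda j: -j[1]):
        load, m = heapq.heappop(loads)
        assignment[m].append(job)
        heapq.heappush(loads, (load + dur, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

Real procedure scheduling adds the constraints a simple makespan model ignores (staff, time windows, recurring sessions such as haemodialysis), which is why the thesis turns to a richer model and an evolutionary heuristic.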
700

Computing sparse multiples of polynomials

Tilak, Hrushikesh 20 August 2010 (has links)
We consider the problem of finding a sparse multiple of a polynomial. Given a polynomial f ∈ F[x] of degree d over a field F, and a desired sparsity t = O(1), our goal is to determine whether there exists a multiple h ∈ F[x] of f such that h has at most t non-zero terms, and if so, to find such an h. When F = Q, we give an algorithm running in time polynomial in d and the size of the coefficients of h. For binomial multiples we prove a polynomial bound, independent of coefficient size, on the degree of the least-degree binomial multiple. When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of integer factoring), and this lower bound is tight when t = 2.
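The connection between binomial multiples (t = 2) and multiplicative order can be sketched by brute force: h = x^k - c is a multiple of f exactly when x^k ≡ c (mod f), so finding the least such k amounts to finding the order of x in F[x]/(f) up to constants. This is an illustration only, not the thesis's algorithm; it assumes f is monic with deg f ≥ 2 and f(0) ≠ 0.

```python
def polymulmod(a, b, f, p):
    """Multiply polynomials a*b and reduce modulo monic f over GF(p).
    Polynomials are coefficient lists, lowest degree first."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    d = len(f) - 1
    while len(res) > d:
        lead = res.pop()          # coefficient of the current top degree
        if lead:
            # x^d = -(f[0] + f[1] x + ... + f[d-1] x^(d-1)) mod f
            for k in range(d):
                res[-d + k] = (res[-d + k] - lead * f[k]) % p
    return res

def binomial_multiple(f, p, max_k=200):
    """Brute-force search for the least k with x^k = c (mod f) over GF(p);
    then x^k - c is a binomial multiple of f. Returns (k, c) or None."""
    x = [0, 1]
    cur = x[:]
    for k in range(1, max_k + 1):
        if all(c == 0 for c in cur[1:]):
            return k, cur[0]
        cur = polymulmod(cur, x, f, p)
    return None
```

The hardness result quoted in the abstract runs this reduction in the other direction: an efficient sparse-multiple finder over a finite field would yield an efficient order-finding procedure.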
