1231

Projection Methods for Variational Inequalities Governed by Inverse Strongly Monotone Operators

Lin, Yen-Ru 26 June 2010 (has links)
Consider the variational inequality (VI) x* ∈ C, ⟨Fx*, x − x*⟩ ≥ 0 for all x ∈ C (*), where C is a nonempty closed convex subset of a real Hilbert space H and F : C → H is a monotone operator from C into H. It is known that if F is strongly monotone and Lipschitzian, then VI (*) is equivalently turned into a fixed point problem of a contraction; hence Banach's contraction principle applies. However, in the case where F is inverse strongly monotone, VI (*) is equivalently transformed into a fixed point problem of a nonexpansive mapping. The purpose of this paper is to present some results which apply iterative methods for nonexpansive mappings to solve VI (*). We introduce Mann's algorithm and Halpern's algorithm and prove that the sequences generated by these algorithms converge weakly and strongly, respectively, to a solution of VI (*), under appropriate conditions imposed on the parameter sequences in the algorithms.
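As a concrete illustration of the fixed-point reformulation, the minimal sketch below runs Mann's iteration x_{n+1} = (1 − α_n)x_n + α_n T(x_n) with T = P_C(I − λF), which is nonexpansive when F is ν-inverse strongly monotone and 0 < λ ≤ 2ν. The affine operator F, the unit-ball constraint set, and the step-size choices are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def mann_vi(F, proj_C, x0, lam, alpha, n_iter=2000):
        # Mann iteration on T = P_C(I - lam*F); T is nonexpansive when F is
        # nu-inverse strongly monotone and 0 < lam <= 2*nu.
        x = x0
        for n in range(n_iter):
            a = alpha(n)
            x = (1 - a) * x + a * proj_C(x - lam * F(x))
        return x

    # Assumed example: F(x) = Ax - b with A symmetric positive definite,
    # C the closed unit ball in R^2.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -1.0])
    F = lambda x: A @ x - b
    proj_C = lambda y: y / max(1.0, np.linalg.norm(y))  # projection onto unit ball
    x_star = mann_vi(F, proj_C, np.zeros(2), lam=0.4, alpha=lambda n: 1.0 / (n + 2))

Here α_n = 1/(n + 2) satisfies the usual divergence condition Σ α_n(1 − α_n) = ∞ required for weak convergence of Mann's scheme.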
1232

The Performance of the Differentially Coherent DS/SS Code Synchronization with Different Adaptive LMS Filtering Schemes

Chang, Yu-Chen 02 August 2005 (has links)
The efficiency of a direct-sequence spread-spectrum (DS/SS) receiver depends heavily on accurate and fast synchronization between the incoming and locally generated PN (pseudo-noise) codes. Code synchronization proceeds in two steps, acquisition (coarse alignment) and tracking (fine alignment), to estimate the delay offset between the two codes. In general, code acquisition and tracking are performed separately and implemented with different structures. Recently, an alternative approach based on an adaptive LMS filtering scheme has been proposed that performs both code acquisition and tracking with an identical structure, using a coherent receiver. This approach can dramatically reduce hardware complexity, especially when long PN codes are considered. In this thesis, a new differentially coherent code synchronization scheme, based on a differential detector followed by an adaptive constrained LMS (CLMS) filtering algorithm with a maximum tap weight (MTW) test, is devised to perform both code acquisition and tracking with an identical structure. Because a differential detector is used for code synchronization, prior knowledge of the carrier phase is not required, as with non-coherent techniques. Numerical analyses and simulation results verify that the proposed scheme has better acquisition performance, in terms of mean acquisition time, than the conventional LMS filtering algorithm with MTW and mean square error (MSE) test schemes, for both integer and non-integer time delay environments. At the same time, the proposed scheme has better tracking capability, in terms of mean hold-in time and mean penalty time, than the conventional LMS filtering schemes under variations in signal-to-noise ratio (SNR) and delay offset (delay difference).
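To make the filtering idea concrete, here is a minimal sketch (not the thesis's CLMS/differential scheme) of how a plain LMS adaptive filter can estimate the code-phase offset: after convergence, the index of the largest tap weight marks the delay, which is the idea behind the MTW test. The signal model and all parameters are assumed for illustration.

    import numpy as np

    def lms_code_phase(received, local_pn, n_taps, mu=0.01):
        # Plain LMS filter driven by the local PN code, with the received
        # signal as the desired response. The largest tap after adaptation
        # indicates the chip offset between the two codes.
        w = np.zeros(n_taps)
        for k in range(n_taps, len(received)):
            x = local_pn[k - n_taps + 1:k + 1][::-1]   # x[d] = pn[k - d]
            e = received[k] - w @ x                    # a-priori error
            w += mu * e * x                            # LMS weight update
        return int(np.argmax(np.abs(w)))

    # Assumed toy setup: +/-1 PN chips, a 7-chip delay, additive white noise.
    rng = np.random.default_rng(0)
    pn = rng.choice([-1.0, 1.0], size=4000)
    rx = np.roll(pn, 7) + 0.3 * rng.normal(size=pn.size)
    print(lms_code_phase(rx, pn, n_taps=32))   # expected to recover the 7-chip offset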
1233

Route Assignment for Distributed Leased Lines in Mobile Cellular Network

Huang, Yung-chia 09 July 2007 (has links)
When a large number of base stations fail due to the breakdown of a transmission circuit in a mobile cellular network, base stations located in neighboring areas may take over for the malfunctioning base stations and continue to provide mobile communications access for users in the surrounding areas, thereby reducing the area in which mobile communications are out of service. Therefore, if the route distribution of leased circuits to base stations is configured before a malfunction occurs, the impact of circuit breakdown and traffic loss can be reduced. Efficiency is also improved if circuit-assignment personnel complete the job while the number of leased lines is still small, avoiding future reassignment and enhancing mobile communications operations. In this study, we use a graph structure to represent the mobile cellular network and establish route-selection strategies. We define the "Optimal Route Assignment" for a newly constructed base station as the route assignment that causes the least impact on the disconnection area when any circuit in the network is broken. We also propose to use the A* algorithm for optimal route assignment. However, computing the optimal route is time consuming. Measures such as computation time and fewest hops are considered in designing other strategies for route assignment. These strategies are parametric, and we carried out experiments by adjusting and controlling parameters using real routing data. The experimental results demonstrate that there is no single winner among the proposed strategies. We identify a number of best strategies for different operating regions.
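For reference, a minimal A* search over a weighted graph looks like the sketch below; the graph encoding and the heuristic h are placeholders, and the thesis's actual cost measure (impact on the disconnection area) would have to be folded into the edge costs.

    import heapq

    def a_star(graph, start, goal, h):
        # graph: {node: [(neighbor, edge_cost), ...]}
        # h: admissible heuristic estimating the remaining cost to goal
        frontier = [(h(start), 0.0, start, [start])]
        best_g = {start: 0.0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nbr, cost in graph.get(node, []):
                g_new = g + cost
                if g_new < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g_new
                    heapq.heappush(frontier, (g_new + h(nbr), g_new, nbr, path + [nbr]))
        return None, float("inf")

With h(n) = 0 for all n, this degenerates to Dijkstra's algorithm, which is a safe fallback when no good lower bound on the remaining route cost is available.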
1234

Comparison Of Decoding Algorithms For Low-density Parity-check Codes

Kolayli, Mert 01 September 2006 (has links) (PDF)
Low-density parity-check (LDPC) codes are a subclass of linear block codes. These codes have parity-check matrices in which the ratio of non-zero elements to all elements is low. This property is exploited in defining low-complexity decoding algorithms. Low-density parity-check codes have good distance properties and error correction capability near the Shannon limit. In this thesis, the sum-product and the bit-flip decoding algorithms for low-density parity-check codes are implemented on an Intel Pentium M 1.86 GHz processor using MATLAB. Simulations of the two decoding algorithms are run over an additive white Gaussian noise (AWGN) channel while varying code parameters such as the information rate, the blocklength of the code, and the column weight of the parity-check matrix. The performance of the two decoding algorithms is compared according to these simulation results. As expected, the sum-product algorithm, which is based on soft-decision decoding, outperforms the bit-flip algorithm, which depends on hard-decision decoding. Our simulations show that the performance of LDPC codes improves with increasing blocklength and number of iterations for both decoding algorithms. Since the sum-product algorithm has lower error-floor characteristics, increasing the number of iterations is more effective for the sum-product decoder than for the bit-flip decoder. By having better BER performance for lower information rates, the bit-flip algorithm performs according to expectations; however, the performance of the sum-product decoder deteriorates for information rates below 0.5 instead of improving. With irregular construction of LDPC codes, a performance improvement is observed, especially at low SNR values.
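The hard-decision bit-flipping decoder is simple enough to sketch in a few lines. The thesis's implementation is in MATLAB; this Python sketch only illustrates the algorithm, not that implementation.

    import numpy as np

    def bit_flip_decode(H, y, max_iter=50):
        # Gallager-style bit flipping: repeatedly flip the bits that take part
        # in the largest number of unsatisfied parity checks.
        # H: m x n parity-check matrix over GF(2); y: received hard decisions.
        x = y.copy()
        for _ in range(max_iter):
            syndrome = H @ x % 2
            if not syndrome.any():
                break                               # all parity checks satisfied
            failures = H.T @ syndrome               # failed checks touching each bit
            x[failures == failures.max()] ^= 1      # flip the most suspect bits
        return x

The sum-product decoder replaces the hard syndrome counts with iterative exchange of log-likelihood messages between variable and check nodes, which is why it retains the soft channel information and outperforms bit flipping.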
1235

Development Of A Multigrid Accelerated Euler Solver On Adaptively Refined Two- And Three-dimensional Cartesian Grids

Cakmak, Mehtap 01 July 2009 (has links) (PDF)
Cartesian grids offer a valuable option for simulating aerodynamic flows around complex geometries such as multi-element airfoils, aircraft, and rockets. Therefore, an adaptively refined Cartesian grid generator and Euler solver are developed. In the mesh generation part of the algorithm, dynamic data structures are used to determine connectivity information between cells, and a uniform mesh is created in the domain. Marching squares and marching cubes algorithms are used to form the interfaces of cut and split cells. Geometry-based cell adaptation is applied during mesh generation. After obtaining an appropriate mesh around the input geometry, the solution is obtained using either a flux vector splitting method or Roe's approximate Riemann solver with a cell-centered approach. Least squares reconstruction of the flow variables within each cell is used to determine high-gradient regions of the flow. Solution-based adaptation is then applied to the current mesh in order to refine these regions and to coarsen regions where unnecessarily small cells exist. Multistage time stepping with local time steps is used to increase the convergence rate, and the FAS multigrid technique is employed to accelerate convergence further. Implementation of geometry- and solution-based adaptation is easier on Cartesian meshes than on other types of meshes. The presented numerical results show the accuracy and efficiency of the algorithm, especially when geometry- and solution-based adaptation are used. Finally, Euler solutions on Cartesian grids around airfoils, projectiles, and wings are compared with experimental and numerical data available in the literature, and the accuracy and efficiency of the solver are verified.
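A minimal sketch of the geometry-based adaptation idea on a quadtree, the 2-D analogue of the solver's refinement; the cell layout and the intersects_geometry predicate are assumptions for illustration.

    import math

    class Cell:
        # A square quadtree cell: (x, y) is the lower-left corner.
        def __init__(self, x, y, size, level):
            self.x, self.y, self.size, self.level = x, y, size, level
            self.children = []

        def refine(self, intersects_geometry, max_level):
            # Geometry-based adaptation: recursively split every cell cut by
            # the body surface until the maximum refinement level is reached.
            if self.level >= max_level or not intersects_geometry(self):
                return
            half = self.size / 2.0
            for dx in (0.0, half):
                for dy in (0.0, half):
                    child = Cell(self.x + dx, self.y + dy, half, self.level + 1)
                    child.refine(intersects_geometry, max_level)
                    self.children.append(child)

    # Example: refine around a circle of radius 0.3 centered in the unit square.
    def cuts_circle(c, cx=0.5, cy=0.5, r=0.3):
        # The cell straddles the circle if its distance range to the center
        # brackets the radius.
        nearest_x = min(max(cx, c.x), c.x + c.size)
        nearest_y = min(max(cy, c.y), c.y + c.size)
        d_min = math.hypot(nearest_x - cx, nearest_y - cy)
        corners = [(c.x + dx, c.y + dy) for dx in (0, c.size) for dy in (0, c.size)]
        d_max = max(math.hypot(px - cx, py - cy) for px, py in corners)
        return d_min <= r <= d_max

    root = Cell(0.0, 0.0, 1.0, 0)
    root.refine(cuts_circle, max_level=6)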
1236

Hierarchical Data Structures for Pattern Recognition

Choudhury, Sabyasachy 05 1900 (has links)
Pattern recognition is an important area with potential applications in computer vision, speech understanding, knowledge engineering, bio-medical data classification, earth sciences, life sciences, economics, psychology, linguistics, etc. Clustering is an unsupervised classification process coming under the area of pattern recognition. There are two types of clustering approaches: 1) non-hierarchical methods and 2) hierarchical methods. Non-hierarchical algorithms are iterative in nature and perform well in the context of isotropic clusters. The time complexity of these algorithms is O(n) and above. Hierarchical agglomerative algorithms, on the other hand, are effective when clusters are non-isotropic. The single linkage method of the hierarchical category produces a dendrogram which corresponds to the minimal spanning tree; conventional approaches are time consuming, requiring O(n²) computational time. In this thesis we propose an intelligent partitioning scheme for generating the minimal spanning tree in the coordinate space. This is computationally elegant as it avoids the computation of similarity between many pairs of samples. The minimal spanning tree generated can be used to produce C disjoint clusters by breaking the (C−1) longest edges in the tree, as sketched after this abstract. A systolic architecture has been proposed to increase the speed of the algorithm further. A simulation study has been conducted and the corresponding results are reported. The simulation package has been developed on a DEC-1090 in Pascal. Based on the simulation study, it is observed that the parallel implementation reduces the time enormously. The number of processors required for the parallel implementation is a constant, making the approach more attractive. Texture analysis and synthesis have been extensively studied in the context of computer vision. Two important approaches studied extensively by earlier researchers are the statistical and structural approaches. Texture is understood to be a periodic pattern with primitive sub-patterns repeating in a particular fashion. This has been used to characterize texture with the help of a hierarchical data structure, the tree. A tree data structure is convenient since, along with operations like merging, splitting, deleting a node, adding a node, etc., it is well suited to handling a periodic pattern. Various functions like the angular second moment, correlation, etc., which are used to characterize texture, have been translated into the new language of hierarchical data structures.
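A compact sketch of the single-linkage step mentioned above: build the minimal spanning tree, then delete the C−1 longest edges. SciPy is used here purely for illustration; the thesis's contribution is a partitioning scheme that avoids computing most pairwise similarities.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def mst_clusters(points, n_clusters):
        # Single-linkage clustering: the MST of the complete distance graph,
        # with the (C-1) heaviest edges removed, leaves C connected components.
        mst = minimum_spanning_tree(squareform(pdist(points))).tocoo()
        keep = (np.argsort(mst.data)[:-(n_clusters - 1)] if n_clusters > 1
                else np.arange(mst.data.size))
        parent = list(range(len(points)))      # union-find over the kept edges

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for k in keep:
            parent[find(int(mst.row[k]))] = find(int(mst.col[k]))
        return np.array([find(i) for i in range(len(points))])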
1237

Integrating A New Cluster Assignment And Scheduling Algorithm Into An Experimental Retargetable Code Generation Framework

Vasanta Lakshmi, Kommineni 05 1900 (has links)
This thesis presents a new unified algorithm for cluster assignment and acyclic region scheduling in a partitioned architecture, and preliminary results on its integration into an experimental retargetable code generation framework. The objective of this work is twofold. The first is to validate, for the first time, and evaluate the framework, which is almost automatic, so as to gain insights into possibilities for improvement. This was done using, as a baseline for comparison, highly optimized code generated by the handcrafted Texas Instruments compiler, the TI Code Composer Studio V2. The second objective is to compare the integrated scheduling algorithm with another well-known algorithm which performs scheduling and cluster allocation in the same phase, the Unified Assign and Schedule (UAS) algorithm. The computational complexity of the two algorithms is comparable. The components of the framework experimented with here are (a) a tree transformer generator, which takes as input a description of the instruction set of the target architecture in the form of a regular tree grammar augmented with actions and attributes, and outputs a data dependency directed acyclic graph, (b) the well-known public domain IMPACT front end for C, (c) a microarchitecture description module, which uses a modification of the HMDES architecture description language of the TRIMARAN project to include cluster information, and (d) a combined cluster allocator and acyclic region scheduler and a register allocator designed and implemented by us. Experiments have been carried out on creating the proper interfaces for all the modules to work together and on targeting the tool to the Texas Instruments TMS320c62x architecture to establish the feasibility of this approach. We present the results of our implementation on a set of benchmarks and some sorting programs and compare them with those obtained from the state-of-the-art TI compiler. The performance without software pipelining shows that our executables take on average 1.4 times the execution time of those generated by the TI compiler. The integrated scheduling algorithm proposed in this thesis performs at least as well as the UAS algorithm, and sometimes better by as much as 9%, in terms of the parallelism obtained.
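For intuition, here is a greedy sketch of the single-phase idea that UAS-style algorithms share: for each ready operation, pick the cluster that lets it start earliest once inter-cluster transfer costs are counted. The DAG encoding, unit-latency model, and transfer cost are assumptions for illustration, not the thesis's algorithm.

    def assign_and_schedule(dag, n_clusters, latency=1, xfer=1):
        # dag: {op: [predecessor ops]}. Returns {op: (cluster, start_cycle)}.
        # Cluster choice and cycle assignment happen in the same greedy pass.
        schedule, free_at = {}, [0] * n_clusters
        remaining = dict(dag)
        while remaining:
            ready = [op for op, preds in remaining.items()
                     if all(p in schedule for p in preds)]
            for op in ready:
                choices = []
                for c in range(n_clusters):
                    # operands living on another cluster pay a transfer penalty
                    data_ready = max([0] + [schedule[p][1] + latency +
                                            (xfer if schedule[p][0] != c else 0)
                                            for p in remaining[op]])
                    choices.append((max(data_ready, free_at[c]), c))
                start, c = min(choices)
                schedule[op] = (c, start)
                free_at[c] = start + latency
                del remaining[op]
        return schedule

    # Tiny example DAG: d consumes a and b; e consumes c and d.
    dag = {"a": [], "b": [], "c": [], "d": ["a", "b"], "e": ["c", "d"]}
    print(assign_and_schedule(dag, n_clusters=2))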
1238

Improved Solution Techniques For Trajectory Optimization With Application To A RLV-Demonstrator Mission

Arora, Rajesh Kumar 07 1900 (has links)
Solutions to trajectory optimization problems are obtained by direct and indirect methods. Under the broad heading of these methods, numerous algorithms such as collocation and direct, indirect, and multiple shooting methods have been developed and reported in the literature. Each of these algorithms has certain advantages and limitations. For example, the direct shooting technique is not suitable when the number of nonlinear programming variables is large. The indirect shooting method requires analytical derivatives of the control and co-state functions, and a poorly guessed initial condition can result in numerically unstable values of the adjoint variables. Multiple shooting techniques can alleviate some of these difficulties by breaking the trajectory into several segments, which helps reduce the nonlinear effects of early control on later parts of the trajectory. However, multiple shooting methods must then handle a larger number of variables and constraints to satisfy the defects at the segment joints. The size of the nonlinear programming problem in the collocation method is also large, and proper locations of the grid points are necessary to satisfy all the path constraints. Stochastic methods such as genetic algorithms, on the other hand, require a large number of function evaluations before convergence. To overcome some of the limitations of the conventional methods, improved solution techniques are developed. Three improved methods are proposed for the solution of trajectory optimization problems:
• a genetic algorithm employing the dominance and diploidy concept,
• a collocation method using Chebyshev polynomials, and
• a hybrid method that combines collocation and the direct shooting technique.
A conventional binary-coded genetic algorithm uses a haploid chromosome, where a single string contains all the variable information in coded form. A diploid, as the name suggests, uses a pair of chromosomes to store the same characteristic feature. The diploid genetic algorithm uses a dominance map for decoding the genotype into a stable, consistent phenotype. In dominance, one allele takes precedence over another. Diploidy and dominance help retain the previous best solutions discovered and shield them from harmful selection in a changing environment. Hence, diploidy and dominance effect a kind of long-term memory in the genetic algorithm. They allow alternate solutions to co-exist: one solution is expressed while the other is held in abeyance. In the improved diploid genetic algorithm, dominant and recessive genes are defined based on the fitness evaluation of each string. The genotype of the fittest string is declared the dominant map. The dominant map is dynamic in nature, as it is replaced by a better individual in future generations; a sketch of this scheme appears below. The concept of diploidy and dominance in the improved method mimics the principles of human genetics more closely than other such algorithms reported in the literature. It is observed that the improved diploid genetic algorithm is able to locate the optimum for a given trajectory optimization problem with 10% lower computational time compared to the haploid genetic algorithm.
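A minimal sketch of the diploidy-and-dominance scheme just described; the selection and crossover details, bit length, and fitness function are illustrative assumptions, not the thesis's exact operators.

    import numpy as np

    rng = np.random.default_rng(0)

    def diploid_ga(fitness, n_bits, pop=40, gens=100, p_mut=0.02):
        # Each individual carries a pair of chromosomes; a dominance map picks
        # which allele is expressed at each locus. The map is dynamic: it
        # tracks the genotype of the fittest string of the current generation.
        P = rng.integers(0, 2, (pop, 2, n_bits))
        dom = rng.integers(0, 2, n_bits).astype(bool)
        for _ in range(gens):
            pheno = np.where(dom, P[:, 0, :], P[:, 1, :])
            fit = np.array([fitness(p) for p in pheno])
            dom = P[fit.argmax(), 0].astype(bool)        # update dominant map
            # binary tournament selection
            a = rng.integers(0, pop, pop)
            b = rng.integers(0, pop, pop)
            P = P[np.where(fit[a] > fit[b], a, b)]
            # uniform allele shuffle between the two chromosomes of each pair
            mask = rng.integers(0, 2, (pop, 1, n_bits), dtype=bool)
            P = np.where(mask, P, P[:, ::-1, :])
            # bit-flip mutation
            P = P ^ (rng.random((pop, 2, n_bits)) < p_mut)
        pheno = np.where(dom, P[:, 0, :], P[:, 1, :])
        return pheno[np.array([fitness(p) for p in pheno]).argmax()]

    # Toy run: maximize the number of ones in a 32-bit string.
    print(diploid_ga(lambda bits: bits.sum(), n_bits=32))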
Parameter optimization problems arising from optimal control problems in which the states and control are approximated by piecewise Chebyshev polynomials are well known; these polynomials are more accurate than interpolating segments over equally spaced data. In the collocation method involving Chebyshev polynomials, the derivatives of two neighboring polynomials are matched with the dynamics at the nodal points. This leads to a large number of equality constraints in the optimization problem. In the improved method, the derivative of the polynomial is also matched with the dynamics at the center of each segment. Though it appears that the problem size is merely increased, the additional computations improve the accuracy of the polynomial over a larger segment. The implicit integration step size is enhanced, and the overall size of the problem is brought down to one-fourth of the problem size of a conventional collocation method using Chebyshev polynomials. The hybrid method uses both collocation and direct shooting techniques, combining the advantages of both methods for greater synergy. The collocation method is used in the starting phase of the hybrid method. The disadvantage of the standalone collocation method is that tuning of the grid points is required to satisfy the path constraints. Nevertheless, the collocation method does give a good guess for the terminal phase of the hybrid method, which uses a direct shooting approach. Results show nearly a 30% reduction in computation time for the hybrid approach compared to a method using direct shooting alone, for the same initial guess of the control. The solutions obtained from the three improved methods are compared with an indirect method. The indirect method requires derivations of the control and adjoint equations, which are difficult and problem specific. Due to the sensitivity of the costate variables, it is often difficult to find a solution through the indirect method. Nevertheless, these methods do provide accurate results, which define a benchmark for comparing the solutions obtained through the improved methods. Trajectory design and optimization of an RLV (Reusable Launch Vehicle) Demonstrator mission is considered as a test problem for evaluating the performance of the improved methods. The optimization problem is more difficult than a conventional launch vehicle trajectory optimization problem for two reasons:
• aerodynamic lift forces on the RLV add one more dimension to the already complex launch vehicle optimization problem, and
• as the RLV performs a sub-orbital flight, the ascent phase trajectory influences the re-entry trajectory.
Both the ascent and re-entry optimization problems of the RLV mission are addressed. It is observed that the hybrid method gives accurate results with the least computational effort among the improved techniques for the trajectory optimization problem of the RLV during its ascent flight. The hybrid method is then successfully used during the re-entry phase and in designing feasible optimal trajectories under dispersion conditions. Analytical solutions from the literature are used to validate the optimized re-entry trajectory. Trajectory optimization studies are also carried out for off-nominal performances. Being a thrusting phase, the ascent trajectory is subject to significant deviations, mainly arising from solid booster performance dispersions. The performance index during the ascent phase is modified in a novel way to handle dispersions: it minimizes, in a least squares sense, the state errors defined at the burnout conditions, ensuring the possibility of safe re-entry trajectories.
The optimal trajectories under dispersion conditions serve as a benchmark for validating the closed-loop guidance algorithm developed for the ascent phase flight. Finally, an on-line trajectory command-reshaping algorithm is developed which meets the flight objectives under dispersion conditions. The guidance algorithm uses a pre-computed trajectory database along with some real-time measured parameters to generate the optimal steering profiles. The flight objectives are met under the dispersion conditions, and the guidance-generated steering profiles match the optimal trajectories closely.
1239

Large Scale Implementation Of The Block Lanczos Algorithm

Srikanth, Cherukupally 03 1900 (has links)
Large sparse matrices arise in many applications, especially in the major cryptographic problems of factoring integers and computing discrete logarithms. We focus attention on such matrices, called sieve matrices, generated after the sieving stage of the algorithms for integer factoring. We need to solve the large sparse system of equations Bx = 0, with sieve matrices B arising in this context. Traditional Gaussian elimination, with its cubic run time, is not efficient for handling such matrices. Better algorithms for such input matrices are the quadratic-runtime algorithms based on Block Lanczos (BL) or Wiedemann techniques. Of these two, BL is the better choice for large integer factoring algorithms. We carry out an efficient implementation of the Block Lanczos algorithm for finding vectors in the null space of the sieve matrix. We report test results using our implementation for matrices of sizes up to 10^6. We plan to use this implementation in our ongoing projects on factoring the large RSA challenge integers of 640 bits (called RSA-640) and beyond, so it is useful to exploit possible parallelism. We propose a scheme for parallelizing certain steps of the Block Lanczos method, taking advantage of structural properties of the sieve matrix. Since the matrices arising in the integer factoring context are quite large, we also discuss some techniques used to reduce the size of the sieve matrix. Finally, we consider the last stage of the NFS algorithm, finding square roots of large algebraic numbers, and outline a sketch of our algorithm.
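The kernel operation in a Block Lanczos implementation over GF(2) is a sparse matrix applied to a block of vectors; packing 64 vectors into one machine word per entry lets a single XOR process the whole block at once. A minimal sketch of that idea follows; the row-index representation is an assumption for illustration.

    import numpy as np

    def gf2_block_matvec(rows, X):
        # rows[i] lists the nonzero column indices of row i of the sparse
        # GF(2) sieve matrix B. X is a length-n array of uint64 words, each
        # word holding one bit from 64 candidate vectors, so one pass
        # computes B times all 64 vectors simultaneously.
        Y = np.zeros(len(rows), dtype=np.uint64)
        for i, cols in enumerate(rows):
            acc = np.uint64(0)
            for j in cols:
                acc ^= X[j]          # XOR is addition over GF(2)
            Y[i] = acc
        return Y

    # Tiny example: a 3 x 4 sparse matrix applied to a random 64-vector block.
    rows = [[0, 2], [1, 2, 3], [0, 3]]
    X = np.random.default_rng(0).integers(0, 2**63, 4, dtype=np.uint64)
    Y = gf2_block_matvec(rows, X)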
1240

Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials

Toyinbo, Peter Ayo 23 October 2009 (has links)
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of an intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end, a suitable statistical model will allow researchers to answer a major research question: who benefits from, or is harmed by, this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and must be estimated from observed indicators that are measured with error; it may also have a nonlinear relationship with the outcome. Most existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even if the models include higher-order polynomials, problems may arise when the focus of interest relates to the function over its whole domain. This work develops a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and 1) baseline characteristics measured with error, and 2) the baseline-by-treatment interaction, such that the shapes of these relationships are data driven and need not be determined a priori. In the ALV model structure, the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of the underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves its users of the need to decide on functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
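To illustrate just the GAM building block of the ALV structure (this sketch ignores the latent measurement-error part; the simulated data, variable names, and use of statsmodels are assumptions for illustration):

    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    # Simulated data: outcome y, baseline risk r, treatment indicator t.
    rng = np.random.default_rng(1)
    n = 500
    r = rng.normal(size=n)
    t = rng.integers(0, 2, n)
    y = np.sin(r) + t * 0.5 * r**2 + rng.normal(scale=0.5, size=n)
    df = pd.DataFrame({"y": y, "r": r, "t": t, "rt": r * t})

    # Smooth terms for baseline risk and the risk-by-treatment interaction:
    # the shapes are estimated from the data rather than specified a priori.
    splines = BSplines(df[["r", "rt"]], df=[8, 8], degree=[3, 3])
    model = GLMGam.from_formula("y ~ t", data=df, smoother=splines)
    print(model.fit().summary())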
