181

Increasing the Computational Efficiency of Combinatoric Searches

Morgan, Wiley Spencer 01 September 2016
A new algorithm for the enumeration of derivative superstructures of a crystal is presented. The algorithm increases the efficiency of computational material design methods, such as cluster expansion, by increasing the size and diversity of the systems that can be modeled. Modeling potential alloys requires the exploration of all possible configurations of atoms, and modeling the thermal properties of materials additionally requires knowledge of the possible ways of displacing the atoms. One solution to finding all symmetrically unique configurations and displacements is to generate the complete list of possible configurations and remove those that are symmetrically equivalent. This approach, however, suffers from the combinatoric explosion that occurs when the supercell size is large, when there are more than two atom types, or when atomic displacements are included in the system. The combinatoric explosion is a problem because the large number of possible arrangements makes finding the relatively small number of unique arrangements impractical. The algorithm presented here extends an existing algorithm [Hart & Forcade (2008a); Hart & Forcade (2009a); Hart, Nelson & Forcade (2012a)] to include the extra configurational degree of freedom introduced by displacement directions. It makes use of another recently developed algorithm based on the Pólya counting theorem [Pólya (1937); Pólya & Read (1987); Rosenbrock, Morgan, Hart, Curtarolo & Forcade (2015)] to inform the user of the total number of unique arrangements before performing the enumeration and to ensure that the list of unique arrangements will fit in system memory, and it uses group theory to eliminate large classes of arrangements rather than eliminating arrangements one by one. Three major topics are presented in order: first the Pólya algorithm, second the new algorithm for eliminating duplicate structures, and third the algorithm's extension to include displacement directions. With these tools, it is possible to avoid the combinatoric explosion and enumerate previously inaccessible systems, including those that contain displaced atoms.
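The counting step rests on the Pólya (orbit-counting) theorem. As a minimal illustration, the following Python sketch uses Burnside's lemma, the special case underlying Pólya counting, to count symmetrically distinct colorings of lattice sites under a permutation group; the cyclic four-site example is an assumption for illustration, not a crystal symmetry group from the thesis.

    def count_unique_colorings(perms, n_sites, n_colors):
        """Burnside's lemma: the number of distinct colorings (orbits)
        equals the average, over all symmetry operations, of the number
        of colorings each operation leaves fixed. A permutation fixes a
        coloring iff the coloring is constant on each of its cycles, so
        it fixes exactly n_colors ** (number of cycles) colorings."""
        total = 0
        for p in perms:
            seen, cycles = set(), 0
            for start in range(n_sites):
                if start not in seen:
                    cycles += 1
                    i = start
                    while i not in seen:
                        seen.add(i)
                        i = p[i]
            total += n_colors ** cycles
        return total // len(perms)

    # Example: binary occupations of a 4-site ring under cyclic shifts.
    ring = [[(i + s) % 4 for i in range(4)] for s in range(4)]
    print(count_unique_colorings(ring, 4, 2))  # -> 6 unique arrangements

The same average-over-the-group computation, refined to track the number of atoms of each type, is what lets the Pólya algorithm report the size of the enumeration before it is performed.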
182

Self-adjusting reinforcement learning

Der, Ralf, Herrmann, Michael 10 December 2018
We present a variant of the Q-learning algorithm with automatic control of the exploration rate by a competition scheme. The theoretical approach is accompanied by systematic simulations of a chaos control task. Finally, we give interpretations of the algorithm in the context of computational ecology and neural networks.
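The abstract does not detail the competition scheme, so the sketch below is only a generic tabular Q-learning loop with a self-adjusting exploration rate; the rule that shrinks epsilon when one action's Q-value clearly dominates its competitors is an illustrative stand-in, not the authors' mechanism.

    import random

    def q_learning_adaptive(env_step, n_states, n_actions, steps=5000,
                            alpha=0.1, gamma=0.95):
        """Tabular Q-learning whose exploration rate epsilon decays only
        when the greedy action has clearly won the 'competition' in the
        current state (n_actions must be at least 2)."""
        Q = [[0.0] * n_actions for _ in range(n_states)]
        eps, s = 1.0, 0
        for _ in range(steps):
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r = env_step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            gap = max(Q[s]) - sorted(Q[s])[-2]  # winner vs. runner-up
            eps = max(0.01, eps * (0.999 if gap > 0.1 else 1.0))
            s = s2
        return Q

    # Toy usage: action 1 always moves to state 1 and pays reward 1.
    Q = q_learning_adaptive(lambda s, a: (a, float(a)), 2, 2)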
183

Advanced Electromyogram Signal Processing with an Emphasis on Simplified, Near-Optimal Whitening

Wang, He 22 November 2019
Estimates of the time-varying standard deviation of the surface EMG signal (EMGσ) are used extensively in EMG-torque estimation. A whitening filter can substantially improve the accuracy of EMGσ estimation by removing signal correlation and increasing the statistical bandwidth. However, a subject-specific whitening filter, calibrated separately for each subject, is complex and inconvenient. To address this problem, we first calibrated a 60th-order “Universal” FIR whitening filter using the ensemble mean of the inverse square root of the power spectral density (PSD) of the noise-free EMG signal. Pre-existing data from elbow contractions of 64 subjects, providing 512 recording trials, were used. On an EMG-torque task, the “Universal” FIR whitening filter gave a mean test error of 4.80% maximum voluntary contraction (MVC) with a standard deviation of 2.03% MVC, while the subject-specific whitening filter achieved 4.84 ± 1.98% MVC (both with a whitening band limit of 600 Hz). The two methods showed no statistically significant difference. Furthermore, a 2nd-order IIR whitening filter was designed from the magnitude response of the “Universal” FIR whitening filter via the differential evolution algorithm. This IIR filter performed very similarly to the FIR filter, at 4.81 ± 2.12% MVC; a statistical test again showed no significant difference. Additionally, a complete theory of EMG contraction modeling in the presence of additive measurement noise is described. Results show that subtracting the variance of the whitened noise via the root difference of squares (RDS) is the correct way to remove noise from the EMG signal.
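The RDS noise-correction step can be sketched in a few lines of Python (a minimal sketch with synthetic data; the whitening filters themselves follow the designs described in the thesis):

    import numpy as np

    def rds_emg_sigma(whitened_emg, noise_var):
        """EMGσ estimate with additive noise removed by the root
        difference of squares (RDS): subtract the whitened noise
        variance from the mean signal power, then take the root."""
        power = np.mean(np.asarray(whitened_emg, dtype=float) ** 2)
        return np.sqrt(max(power - noise_var, 0.0))

    # Synthetic check: true EMGσ = 2, additive noise variance = 1.
    rng = np.random.default_rng(0)
    x = 2.0 * rng.standard_normal(10000) + rng.standard_normal(10000)
    print(rds_emg_sigma(x, noise_var=1.0))  # -> approximately 2.0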
184

Computational Algebraic Geometry Applied to Invariant Theory

Shifler, Ryan M. 05 June 2013
Commutative algebra finds its roots in invariant theory, and the connection is drawn from a modern standpoint. The Hilbert Basis Theorem and the Nullstellensatz were considered lemmas for classical invariant theory. The Groebner basis is the modern tool used here and is implemented with the computer algebra system Mathematica. Number 14 of Hilbert's 23 problems is discussed, along with the notion of invariance under a group action of GLn(C). Computational difficulties with Groebner bases in invariant theory are also discussed. The straightening law is presented from a Groebner basis point of view and is motivated as a key piece of machinery in proving the First Fundamental Theorem of Invariant Theory. / Master of Science
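The thesis works in Mathematica, but the same elimination-style Groebner computation can be sketched in Python with SymPy. The Z/2 action (x, y) → (−x, −y), whose invariants are generated by x², xy and y², is an illustrative assumption chosen because its single relation is easy to verify by hand:

    from sympy import symbols, groebner

    x, y, u, v, w = symbols('x y u v w')
    # Name the generators of the invariant ring: u = x**2, v = x*y, w = y**2.
    # With lex order eliminating x and y, basis elements free of x and y
    # are exactly the relations (syzygies) among the invariants.
    G = groebner([u - x**2, v - x*y, w - y**2], x, y, u, v, w, order='lex')
    print([g for g in G.exprs if not g.has(x) and not g.has(y)])
    # -> the single classical relation u*w - v**2 (up to sign)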
185

Truss topology optimization using an improved species-conserving genetic algorithm

Li, Jian-Ping 06 February 2014
The aim of this article is to apply and improve the species-conserving genetic algorithm (SCGA) to search for multiple solutions of truss topology optimization problems in a single run. A species is defined as a group of individuals with similar characteristics, dominated by its species seed; the solutions of an optimization problem are selected from the species found. To improve the accuracy of solutions, a species mutation technique is introduced to improve the fitness of the found species seeds, and a combination of a neighbour mutation and a uniform mutation is applied to balance exploitation and exploration. A real vector represents the corresponding cross-sectional areas, and a member is considered to exist if its area exceeds a critical area. A finite element analysis model was developed to handle practical modelling considerations such as the existence of members, kinematic stability analysis, and computation of stresses and displacements. Cross-sectional areas and node connections are the decision variables and are optimized simultaneously to minimize the total weight of the truss. Numerical results demonstrate that some truss topology optimization problems have many global and local solutions, that different topologies can be found with the proposed algorithm in a single run, and that some of the trusses found are lighter than solutions reported in the literature.
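The heart of species conservation is the selection of species seeds. The Python sketch below shows that step in its generic form (the distance measure, radius and fitness here are illustrative assumptions; the article's improvements, species mutation and the neighbour/uniform mutation mix, are not reproduced):

    import math, random

    def species_seeds(population, fitness, radius):
        """Scan individuals in descending fitness; an individual founds
        a new species unless it lies within `radius` of an existing
        seed, so each species is dominated by its fittest member."""
        seeds = []
        for ind in sorted(population, key=fitness, reverse=True):
            if all(math.dist(ind, s) > radius for s in seeds):
                seeds.append(ind)
        return seeds

    # Toy usage: two clusters of 2-member cross-sectional-area vectors,
    # both conserved as separate species in a single pass.
    pop = [(random.gauss(m, 0.1), random.gauss(m, 0.1))
           for m in (1.0, 5.0) for _ in range(10)]
    print(species_seeds(pop, lambda p: -(p[0] - 3) ** 2, radius=1.0))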
186

Efficient Computation of Regularities in Strings and Applications

Yusufu, Munina 08 1900
Regularities in strings model many phenomena and thus form the subject of extensive mathematical studies. Perhaps the most conspicuous regularities in strings are those that manifest themselves in the form of repeated subpatterns, that is, repeats, multirepeats, repetitions, runs and others. Regularities in the form of repeating substrings were the basis of one of the earliest and still widely used compression algorithms and remain central in more recent approaches. Repeats and repetitions of lengthy substrings in DNA and protein sequences are important markers in biological research.

A large proportion of the available algorithms for computing regularities in strings depends on the prior computation of a suffix tree or, more recently, of a suffix array. The design of new algorithms for computing regularities should emphasize conceptual simplicity as well as both time and space efficiency.

In this thesis, we investigate mathematical and algorithmic aspects of the computation of regularities in strings.

The first part of the thesis develops space- and time-efficient nonextendible (NE) and supernonextendible (SNE) repeats algorithms, RPT, shown to be more efficient than previous methods in tests on various real data sets. In particular, we describe four variants of a new fast algorithm RPT1 that, based on suffix array construction, computes all the complete NE repeats in a given string x whose length (period) p ≥ p_min, where p_min ≥ 1 is a user-specified minimum. RPT1 uses 5n bytes of space directly, but requires the LCP array, whose construction needs 6n bytes. The variants RPT1-3 and RPT1-4 execute in O(n) time independent of alphabet size and are faster than the two other algorithms previously proposed for this problem. To provide a basis of comparison for RPT1, we also describe a straightforward algorithm RPT2 that computes complete NE repeats without any recourse to suffix arrays and whose total space requirement is only 5n bytes; this algorithm is, however, slower than RPT1. Furthermore, we describe new fast algorithms RPT3 for computing all complete SNE repeats in x. Of these, RPT3-2 executes in O(n) time independent of alphabet size, thus asymptotically faster than the methods previously proposed. We conclude with a brief discussion of applications to bioinformatics and data compression.

The second part of the thesis deals with finding NE multirepeats in a set of N strings of average length n under various constraints. A multirepeat is a repeat that occurs at least m times (m ≥ 2) in each of at least q ≥ 1 strings of a given set. We show that RPT1 can be extended to locate multirepeats, based on an investigation of their properties and various strategies. We describe algorithms to find complete NE multirepeats, first with no restriction on “gap length” (the gap between occurrences of the multirepeat), then with bounded gaps. For the first problem, we propose two algorithms with worst-case time complexities O(Nn + α log₂ N) and O(Nn + α), using 9Nn and 10Nn bytes of space, respectively, where α is the alphabet size. For the second problem, we describe an algorithm with worst-case time complexity O(RNn) that requires approximately 10Nn bytes, where R is the number of multirepeats output. We remark that if we set the min and max constraints on gaps to zero in this algorithm, we can find all repetitions (tandem repeats) in arbitrary subsets of a given set. We demonstrate that our algorithms are faster, more flexible and much more space-efficient than algorithms recently proposed for this problem.

Finally, the third part of the thesis provides a convenient framework for comparing LZ factorization algorithms, which are used here in the computation of regularities in strings rather than in the traditional application to text compression. LZ factorization is the computational bottleneck in numerous string processing algorithms, especially in regularity studies such as computing repetitions, runs, repeats with fixed gap, branching repeats, sequence alignment, local periods, and data compression. Since 1977, when Ziv and Lempel described a kind of string factorization useful for data compression, a succession of algorithms has been proposed for computing the “LZ factorization”, including several recent algorithms that extend its usefulness, especially to the computation of runs in a string x. We analyze each of these algorithms separately and compare their important aspects, such as additional space required and handling mechanism, as well as differences in output format and other special features. We then provide a complete theoretical comparison of their time and space efficiency, conduct intensive testing of both time and space performance, and analyze the results to determine in which situations each algorithm performs best. We believe that this investigation and analysis will be useful to researchers choosing an LZ factorization algorithm for problems related to the computation of regularities in strings. / Thesis / Doctor of Philosophy (PhD)
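As a concrete reference point for the third part, here is a deliberately naive Python sketch of LZ factorization (quadratic time; the algorithms compared in the thesis compute the same factorization in linear time via suffix structures):

    def lz_factorize(s):
        """Each factor is the longest prefix of the unprocessed suffix
        that also occurs starting at an earlier position (overlap
        allowed), or a single fresh character. O(n^2) worst case."""
        factors, i, n = [], 0, len(s)
        while i < n:
            best = 0
            for j in range(i):
                l = 0
                while i + l < n and s[j + l] == s[i + l]:
                    l += 1
                best = max(best, l)
            length = max(best, 1)
            factors.append(s[i:i + length])
            i += length
        return factors

    print(lz_factorize("abaababa"))  # -> ['a', 'b', 'a', 'aba', 'ba']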
187

A New Algorithm for Stochastic Approximation

Griscik, Michael Paul 04 1900
A review of stochastic approximation and the major contributions to the area is given. A proof of convergence for the algorithm is developed. An optimization of the rate of convergence is attempted, and the uniqueness problem is confronted. An alternative proof of convergence is given as an independent check on the first. Simulation results are presented in light of the theory developed, and conclusions, limitations and recommendations follow. / Thesis / Master of Engineering (MEngr)
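The abstract does not state the new algorithm's form, but the classical Robbins-Monro scheme that stochastic approximation builds on can be sketched as follows (the gain sequence and the noisy test function are illustrative assumptions):

    import random

    def robbins_monro(g, theta0, steps=10000):
        """Iterate theta <- theta - a_k * g(theta) with gains
        a_k = 1/(k+1), which satisfy sum(a_k) = inf and
        sum(a_k**2) < inf, the standard convergence conditions
        for finding a root of E[g(theta)]."""
        theta = theta0
        for k in range(steps):
            theta -= g(theta) / (k + 1)
        return theta

    # Example: locate the root of f(t) = t - 2 from noisy observations.
    noisy_f = lambda t: (t - 2.0) + random.gauss(0.0, 1.0)
    print(robbins_monro(noisy_f, theta0=0.0))  # -> approximately 2.0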
188

An Automatic Facility for Neutron Activation Analysis

MacDonald, Randy N. 06 1900
The development of a unified system for the automatic neutron activation analysis of large numbers of samples is described. The realization of the system entailed the automation of a gamma ray spectrometer system by means of a data and control link to a small computer (PDP-15) and the development of a reliable and fast data reduction algorithm suited to the small computer system. A detailed study of the algorithm and the errors associated with it has been included. / Thesis / Master of Science (MSc)
189

Minimax System Modelling and Design

Srinivasan, Thandangorai 07 1900
Computer-aided system modelling and design for minimax objectives have been considered in detail. A new algorithm for minimax approximation, called the grazor search method, has been proposed and successfully used on a number of network design problems to test the reliability and efficiency of the method. A critical comparison of the method with existing algorithms has shown the grazor search algorithm to be reliable in most of the problems considered. Practical ideas have been presented to deal with constrained minimax optimization problems and to investigate a solution for minimax optimality. Two user-oriented computer programs incorporating these ideas have been included as part of the thesis. Lower-order modelling of a high-order system has been considered for minimax objectives, and the suggested ideas make it feasible to design automated models for a variety of transient and steady-state constraint specifications. / Thesis / Doctor of Philosophy (PhD)
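For context, the generic minimax design objective that methods like grazor search attack can be written down and solved crudely in a few lines of Python; the model, specification and derivative-free solver below are illustrative assumptions, not the grazor search algorithm itself:

    import numpy as np
    from scipy.optimize import minimize

    # Choose parameters p to minimize the worst-case response error
    #   max_i |f(x_i, p) - spec_i|
    # over sample points x_i: the standard minimax approximation form.
    xs = np.linspace(0.0, 1.0, 20)
    spec = np.exp(-xs)                           # desired response
    model = lambda p: p[0] * np.exp(p[1] * xs)   # two-parameter model

    worst_err = lambda p: np.max(np.abs(model(p) - spec))
    res = minimize(worst_err, x0=[0.5, -0.5], method='Nelder-Mead')
    print(res.x, worst_err(res.x))  # -> near [1, -1], error near 0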
190

Assessing the performance of the simulated annealing algorithm using information theory

Fleischer, Mark Alan January 1994
No description available.
