21

E-recursively enumerable degrees

Griffor, Edward R. January 1980 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1980. / Vita. / Bibliography: leaves 161-163. / by Edward R. Griffor.
22

Recursion on inadmissible ordinals

Friedman, Sy David January 1976 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1976. / Microfiche copy available in Archives and Science. / Vita. / Bibliography: leaves 123-125. / by Sy D. Friedman.
23

Improving the Karatsuba-Ofman multiplication algorithm for special applications

Erdem, Serdar S. 08 November 2001 (has links)
In this thesis, we study the Karatsuba-Ofman Algorithm (KOA), a recursive multi-precision multiplication method, and improve it for certain special applications. The thesis is in two parts. In the first part, we derive an efficient algorithm from the KOA to multiply operands having a precision of 2^m computer words for some integer m. This new algorithm is less complex and requires a third as many recursive calls as the KOA, although the order of its complexity is the same. In the second part, we introduce a novel method to perform fast multiplication in GF(2^m) using the KOA. This method is intended for software implementations and has two phases. In the first phase, we treat the field elements in GF(2^m) as polynomials over GF(2) and multiply them by a technique based on the KOA, which we call the LKOA (lean KOA). In the second phase, we reduce the product with an irreducible trinomial or pentanomial. The LKOA is similar to the KOA, but it stops the recursion early and switches to nonrecursive algorithms that can efficiently multiply small polynomials over GF(2). We derive these nonrecursive algorithms from the KOA by removing its recursions, and optimize them by exploiting the arithmetic of polynomials over GF(2). As a result, we obtain a decrease in complexity as well as a reduction in recursion overhead. / Graduation date: 2002
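To make the recursion concrete, here is a minimal sketch of the classic Karatsuba-Ofman multiplication in Python. It illustrates the split-into-halves recursion the thesis builds on, not the thesis's optimized word-aligned or GF(2^m) variants; the 32-bit cutoff is an arbitrary choice for illustration.

```python
def karatsuba(x, y):
    """Multiply non-negative integers with the Karatsuba-Ofman recursion:
    three half-size products instead of four."""
    if x < (1 << 32) or y < (1 << 32):   # small operands: native multiply (illustrative cutoff)
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask         # x = x1 * 2^half + x0
    y1, y0 = y >> half, y & mask
    z2 = karatsuba(x1, y1)               # high product
    z0 = karatsuba(x0, y0)               # low product
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # middle part from one extra product
    return (z2 << (2 * half)) + (z1 << half) + z0
```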
24

Analysis for an effective operation of a general automaton: recursive methods applied to a graph model.

Frederick, Terry J. January 1900 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1969. / Typescript. / Vita. / Includes bibliography.
25

Computational complexity of real functions and polynomial time approximations

Ko, Ker-I January 1979 (has links)
No description available.
26

Algorithms for recursive Frisch scheme identification and errors-in-variables filtering

Linden, J. G. January 2008 (has links)
This thesis deals with the development of algorithms for recursive estimation within the errors-in-variables framework. Within this context attention is focused on two major threads of research: Recursive system identification based on the Frisch scheme and the extension and application of errors-in-variables Kalman filtering techniques. In the first thread, recursive algorithms for the approximate update of the estimates obtained via the Frisch scheme, which makes use of the Yule-Walker model selection criterion, are developed for the case of white measurement noise. Gradient-based techniques are utilised to update the Frisch scheme equations, which involve the minimisation of the model selection criterion as well as the solution of an eigenvalue problem, in a recursive manner. The computational complexity of the resulting algorithms is critically analysed and, by introducing additional approximations, fast recursive Frisch scheme algorithms are developed, which reduce the computational complexity from cubic to quadratic order. In addition, it is investigated how the singularity condition within the Frisch scheme is affected when the estimates are computed recursively. Whilst this first group of recursive Frisch scheme algorithms is developed directly from the offline Frisch scheme equations, it is also possible to interpret the Frisch scheme within an extended bias compensating least squares framework. Consequently, the development of recursive algorithms, which update the estimate obtained from the extended bias compensated least squares technique, is considered. These algorithms make use of the bilinear parametrisation principle or, alternatively, the variable projection method. Finally, two recursive Frisch scheme algorithms are developed for the case of coloured output noise. The second thread, which considers the theory of errors-in-variables filtering for linear systems, extends the approach to deal with a class of bilinear systems, a frequently used subset of nonlinear systems. The application of errors-in-variables filtering for the purpose of system identification is also considered. This leads to the development of a prediction error method based on symmetric innovations, which resembles the joint output method. Both the offline and online implementation of this novel identification technique are investigated.
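As background to the recursive estimators described above, the following is a minimal sketch of the standard recursive least squares update that such algorithms extend. The Frisch scheme's bias compensation for input and output measurement noise is deliberately omitted, so this is the building block, not the thesis's method.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step: update parameter estimate theta
    and covariance-like matrix P with regressor phi and observation y.
    lam is an optional forgetting factor (1.0 = ordinary RLS)."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)           # Kalman-style gain vector
    theta = theta + gain * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(gain, Pphi)) / lam       # rank-one covariance update
    return theta, P

# Typical initialisation: theta = np.zeros(d); P = 1e3 * np.eye(d)
```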
27

Infinite impulse response notch filter

Jangsri, Venus 12 1900 (has links)
Approved for public release; distribution is unlimited / A pipeline technique by Loomis and Sinha has been applied to the design of recursive digital filters, allowing recursive digital filters to operate at hitherto unattainably high rates. An alternate technique by R. Gnanasekaran allows high-speed implementation using the state-space structure directly. High throughput is also achieved by the use of pipelined multiply-add modules; the actual hardware complexity depends upon the number of pipeline stages. These techniques are used for the design of the IIR notch filter, and finally a comparison of the performance and complexity of the two techniques is presented. / http://archive.org/details/infiniteimpulser00jang / Lieutenant, Royal Thai Navy
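For reference, a second-order IIR notch filter of the kind discussed is easy to state directly: zeros on the unit circle at the notch frequency and poles just inside it. A minimal sketch, assuming a standard biquad realisation rather than the pipelined structures of Loomis-Sinha or Gnanasekaran:

```python
import math

def notch_coefficients(f0, fs, r=0.98):
    """Second-order IIR notch at f0 Hz for sample rate fs.
    Zeros sit on the unit circle at +-f0; poles at radius r < 1
    set the notch bandwidth (r closer to 1 = narrower notch)."""
    w0 = 2.0 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]        # numerator (zeros)
    a = [1.0, -2.0 * r * math.cos(w0), r * r]  # denominator (poles)
    return b, a

def filter_sample(b, a, x, state):
    """Direct-form II transposed biquad, one sample per call; the feedback
    through past outputs is what makes the filter recursive (IIR).
    `state` is a 2-element list carried between calls."""
    y = b[0] * x + state[0]
    state[0] = b[1] * x - a[1] * y + state[1]
    state[1] = b[2] * x - a[2] * y
    return y
```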
28

Measuring the Stability of Results from Supervised Statistical Learning

Philipp, Michel, Rusch, Thomas, Hornik, Kurt, Strobl, Carolin 17 January 2017 (has links) (PDF)
Stability is a major requirement for drawing reliable conclusions when interpreting results from supervised statistical learning. In this paper, we present a general framework for assessing and comparing the stability of results that can be used in real-world statistical learning applications or in benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practice would be to compare the stability of results generated by different candidate algorithms for a data set at hand, or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package. / Series: Research Report Series / Department of Statistics and Mathematics
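The framework itself is distributed as an R package, but the underlying idea can be sketched in a few lines: retrain the learner on pairs of resampled data sets and measure how often the resulting predictions agree. A rough illustration (not the paper's package) using scikit-learn's decision trees as the unstable learner:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def prediction_stability(X, y, n_pairs=50, seed=0):
    """Fit trees on pairs of bootstrap resamples and return the average
    rate at which the two fits agree on the original data
    (1.0 = perfectly stable results)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    agree = []
    for _ in range(n_pairs):
        preds = []
        for _ in range(2):
            idx = rng.integers(0, n, size=n)   # bootstrap resample
            model = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
            preds.append(model.predict(X))
        agree.append(np.mean(preds[0] == preds[1]))
    return float(np.mean(agree))
```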
29

Evaluating Model-based Trees in Practice

Zeileis, Achim, Hothorn, Torsten, Hornik, Kurt January 2006 (has links) (PDF)
A recently suggested algorithm for recursive partitioning of statistical models (Zeileis, Hothorn and Hornik, 2005), such as models estimated by maximum likelihood or least squares, is evaluated in practice. The general algorithm is applied to linear regression, logistic regression and survival regression, and evaluated on economic and medical regression problems. Furthermore, its performance with respect to prediction quality and model complexity is compared in a benchmark study with a large collection of other tree-based algorithms, showing that the algorithm yields interpretable trees that are competitive with previously suggested approaches. / Series: Research Report Series / Department of Statistics and Mathematics
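The published algorithm selects split variables with parameter-instability tests; purely as an illustration of the model-based partitioning idea, the sketch below instead scores candidate cut points by the combined residual sum of squares of two node-wise least squares fits:

```python
import numpy as np

def best_split(X, y, z, min_leaf=20):
    """Score each candidate cut point c on partitioning variable z by
    fitting separate least squares models of y on X in the two resulting
    nodes; return the cut with the smallest combined residual SSE."""
    def node_sse(mask):
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        resid = y[mask] - X[mask] @ beta
        return float(resid @ resid)

    best_cut, best_sse = None, np.inf
    for c in np.unique(z)[1:]:
        left = z < c
        if left.sum() < min_leaf or (~left).sum() < min_leaf:
            continue                        # keep both child nodes fittable
        total = node_sse(left) + node_sse(~left)
        if total < best_sse:
            best_cut, best_sse = c, total
    return best_cut, best_sse
```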
30

Conditioning graphs: practical structures for inference in Bayesian networks

Grant, Kevin John 16 January 2007
Probability is a useful tool for reasoning when faced with uncertainty. Bayesian networks offer a compact representation of a probabilistic problem, exploiting independence amongst variables that allows a factorization of the joint probability into much smaller local probability distributions.

The standard approach to probabilistic inference in Bayesian networks is to compile the graph into a join tree and perform computation over this secondary structure. While join trees are among the most time-efficient methods of inference in Bayesian networks, they are not always appropriate for certain applications. The memory requirements of a join tree can be prohibitively large, and the algorithms for computing over join trees are large and involved, making them difficult to port to other systems and difficult for general programmers without Bayesian network expertise to understand.

This thesis proposes a different method for probabilistic inference in Bayesian networks. We present a data structure called a conditioning graph, which is a run-time representation of Bayesian network inference. The structure mitigates many of the problems of join tree inference. For example, conditioning graphs require much less space to store and compute over. The algorithm for calculating probabilities from a conditioning graph is small and basic, making it portable to virtually any architecture. And the details of Bayesian network inference are compiled away during the construction of the conditioning graph, leaving an intuitive structure that is easy to understand and implement without any Bayesian network expertise.

In addition to the conditioning graph architecture, we present several improvements to the model that maintain its small and simple style while reducing the runtime required for computing over it. We present two heuristics for choosing variable orderings that result in shallower elimination trees, reducing the overall complexity of computing over conditioning graphs. We also demonstrate several compile-time and runtime extensions to the algorithm that can produce substantial speedup while adding only a small space constant to the implementation. We also show how to cache intermediate values in conditioning graphs during probabilistic computation, which allows conditioning graphs to perform at the same speed as standard methods by avoiding duplicate computation, at the price of more memory. The methods presented conform to the basic style of the original algorithm, and we demonstrate a novel technique for reducing the amount of memory required for caching.

We demonstrate empirically the compactness, portability, and ease of use of conditioning graphs. We also show that the optimizations of conditioning graphs allow competitive behaviour with standard methods in many circumstances, while still preserving their small and simple style. Finally, we show that the memory required under caching can be quite modest, meaning that conditioning graphs can be competitive with standard methods in terms of time while using a fraction of the memory.
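To see what a conditioning graph compiles away, here is a minimal sketch of the underlying recursion: condition on each unobserved variable in turn, sum out its values, and multiply in conditional probability table entries at the leaves. The dictionary-based CPT format and Boolean variables are illustrative assumptions; a real conditioning graph fixes this recursion as a compact run-time structure rather than recomputing it.

```python
def query(target, value, evidence, cpts, order):
    """P(target = value | evidence) by recursive conditioning over the
    variables in `order`. cpts maps each variable to (parents, table),
    where table[(parent values...)][var value] is a probability."""
    def rec(i, assigned):
        if i == len(order):                      # leaf: full assignment
            p = 1.0
            for var, (parents, table) in cpts.items():
                key = tuple(assigned[q] for q in parents)
                p *= table[key][assigned[var]]
            return p
        var = order[i]
        if var in assigned:                      # observed: no branching
            return rec(i + 1, assigned)
        return sum(rec(i + 1, {**assigned, var: v})   # condition and sum out
                   for v in (True, False))

    joint = rec(0, {**evidence, target: value})       # P(target, evidence)
    norm = sum(rec(0, {**evidence, target: v}) for v in (True, False))
    return joint / norm                               # normalise by P(evidence)
```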
