  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Implementation and comparison of key-finding algorithms

Kim, Chang Young January 1976 (has links)
No description available.
2

Algorithms for presenting subgroups

Kipnis, Evan Jonathan. January 1978 (has links)
No description available.
3

A parallel algorithm for solving a complex function f(z) = 0

Cardelino, Carlos Antonio January 1985 (has links)
No description available.
4

Variable metric methods without exact one-dimensional searches.

Fitzpatrick, Gordon James January 1972 (has links)
The selection of updating formulas for the H matrix and the subproblem of one-dimensional search are considered for variable metric algorithms not using exact linear searches in the subproblem. It is argued that the convex class of updating formulas given by Fletcher is the logical choice for such algorithms. It is shown that direct linear searches are more efficient than linear searches using directional derivatives at each point, particularly for objective functions of many variables. Features of algorithms using the convex class of formulas without exact searches are discussed. It is proven that effects on these algorithms of scaling of the variables of the objective function are identical to effects of transforming the initial H matrix. It is predicted that regions of a function where the Hessian is non-positive definite may be detrimental to convergence of these algorithms. A function is given for which Fletcher's recent algorithm (which does not use any linear search for a minimum) fails to converge. Attempts were made to devise a test to detect non-convexity of a function so that an algorithm using no linear search for a minimum in convex regions and an alternative strategy in non-convex regions could be developed to overcome this problem. Algorithms incorporating a test for non-convexity were coded as well as Fletcher's algorithm, an algorithm using a weak quadratic direct minimizing search, and an algorithm using the weak cubic minimizing search as used in the Fletcher Powell method. Numerical experiments were performed on functions from the literature and functions developed by the author. Fletcher's algorithm was found to be inefficient on all functions in comparison to the weak quadratic search algorithm where the initial H matrix had small eigenvalues. Where Fletcher's algorithm was relatively efficient, the former search was in all cases competitive. The value of a direct over derivative linear search is demonstrated. 
The algorithms using a test for convexity were not effective, since even the best of them was not generally effective in detecting non-convexity. It is concluded that algorithms without a form of one-dimensional weak minimizing search are not suitable for use on general minimization problems, and that the weak quadratic direct search proposed is a more efficient and reliable alternative. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
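The weak-search strategy this abstract argues for can be sketched briefly. The following is an illustrative Python sketch, not the thesis's code: it pairs the standard BFGS update (one member of the convex class of formulas discussed) with a simple halving direct search that accepts the first sufficient decrease, standing in for the weak quadratic direct search; all function and parameter names here are illustrative.

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def bfgs_weak_search(f, grad, x0, iters=200):
    """Variable metric iteration with a weak (inexact) direct line search:
    halve the step until a sufficient decrease is observed, rather than
    locating the one-dimensional minimum exactly."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        d = -H @ g                     # variable metric search direction
        f0, t = f(x), 1.0
        # Weak direct search: accept the first step with sufficient decrease.
        while f(x + t * d) > f0 + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature guard keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

On the classical Rosenbrock test function, this inexact-search variant still converges to the minimizer (1, 1), illustrating the abstract's point that an exact one-dimensional search is not required.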
5

Implementable algorithms for stochastic nonlinear programs with applications to portfolio selection and revision

Kallberg, Jarl Gunnar January 1979 (has links)
This dissertation has two main objectives: first, to develop efficient algorithms for the solution of one- and two-period constrained optimization problems, and second, to apply these methods to the solution of portfolio selection and revision problems. The algorithms developed are based upon the Frank-Wolfe method. A convergent algorithm is developed which modifies this approach to allow for sequences of approximations to the objective and to the gradient of the objective, as well as inexact linear searches. By utilizing varying degrees of accuracy (with increasing precision as the optimum is approached), the method will be computationally more tractable than fixed-tolerance methods without sacrificing the convergence properties. This algorithm is then applied to a static portfolio selection problem. Here the investor has a wealth allotment to be allocated to a number of possible risky investments with the objective of maximizing the expected utility of terminal wealth. The investor's preferences are assumed to be given by a (von Neumann-Morgenstern) utility function. For the empirical studies, seven classes of utility functions and ten jointly normally distributed assets are used. One question investigated is the degree to which the Arrow-Pratt risk aversion measure determines portfolio composition. The empirical results are augmented by a theorem showing (for normally distributed security returns) that the Rubinstein global risk aversion measure is sufficient to determine portfolio composition. The second part of this dissertation deals with two-period problems. An algorithm, based on the method of Hogan for extremal value functions, is developed. The extensions (and subsequent advantages) are analogous to those developed for the one-period problem. This method is used to solve a portfolio revision problem utilizing five jointly normally distributed assets and proportional transaction costs.
Empirically, it is shown that significant errors are generated by ignoring the revision aspect of the problem, even with serially uncorrelated returns. / Business, Sauder School of / Graduate
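The Frank-Wolfe method underlying this work reduces each step to a linear subproblem over the feasible set; over a portfolio simplex (fully invested, no short sales) that subproblem is solved by a single vertex. The sketch below is a minimal one-period illustration with the classical fixed step size, not the dissertation's variable-tolerance algorithm; the objective and names are assumptions for the example.

```python
import numpy as np

def frank_wolfe_simplex(grad, n, steps=5000):
    """Frank-Wolfe over the probability simplex: the linearized
    subproblem min_s <grad(x), s> is solved exactly by a unit vector
    at the coordinate with the smallest gradient component."""
    x = np.full(n, 1.0 / n)               # start from the uniform portfolio
    for k in range(steps):
        g = grad(x)
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0             # best vertex of the simplex
        x += (2.0 / (k + 2.0)) * (s - x)  # classical open-loop step size
    return x

# Toy mean-variance-style objective 0.5*||w||^2 - mu @ w. Since mu is
# itself a feasible portfolio, the constrained optimum is w* = mu.
mu = np.array([0.2, 0.3, 0.5])
w = frank_wolfe_simplex(lambda x: x - mu, len(mu))
```

Because every iterate is a convex combination of simplex vertices, the budget and no-short-sale constraints hold at every step without any projection, which is the practical appeal of the method for constrained portfolio problems.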
6

Minterm based search algorithms for two-level minimization of discrete functions

Whitney, Michael James 09 July 2018 (has links)
Techniques for the heuristic and exact two-level minimization of Boolean and multivalued functions are presented. The work is based on a previously existing algorithmic framework for two-level minimization known as directed search. This method is capable of selecting covering prime implicants without generating all of them. Directed search differs from most other minimization methods in that implicant cubes are generated from minterms, not from other cubes. Heretofore, the directed search algorithm and published variants have not been capable of minimizing PLAs of “industrial” size. The algorithms in this work significantly ameliorate this situation. In particular, original and efficient techniques are proposed for prime implicant generation, computation of dominance relations, elimination of redundant minterms, storage and retrieval of cubes and minterms, and isolation and reduction of cycles. The algorithms are embodied in a working minimizer called MDSA. In the absence of cycles, MDSA provides provably optimum cube covers. Empirical comparison with other minimizers shows the new algorithms to be very competitive, even superior. For mid-sized non-cyclic PLAs, MDSA is nearly always faster than the best known heuristic competitor, and usually faster for PLAs containing cycles. The number of cubes found for cyclic PLAs is also better (lower), on average. MDSA can also be set to provide provably minimum solutions for cyclic functions. In this case, MDSA again outperforms competitive minimizers in a similar mode of operation. Both heuristic and exact versions of MDSA are restricted to PLAs with 32 or fewer inputs and 32 or fewer outputs. / Graduate
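The core idea the abstract describes, generating implicant cubes directly from minterms rather than from other cubes, can be illustrated in a few lines. This is not MDSA itself, only a sketch of expanding one seed minterm into a prime implicant by freeing variables one at a time; the representation (a cube as a dict of fixed bits) and the names `expand_minterm` and `cover_ok` are assumptions of this example.

```python
from itertools import product

def cube_minterms(cube, n):
    """Enumerate the minterms covered by a cube, where a cube is a dict
    mapping variable index -> fixed bit (free variables are absent)."""
    free = [v for v in range(n) if v not in cube]
    for bits in product((0, 1), repeat=len(free)):
        m = [0] * n
        for v, b in cube.items():
            m[v] = b
        for v, b in zip(free, bits):
            m[v] = b
        yield tuple(m)

def expand_minterm(minterm, cover_ok, n):
    """Grow a prime implicant from a seed minterm: free one variable at a
    time, keeping it free only if the enlarged cube still lies entirely
    inside the on-set / don't-care set. Once a variable fails, freeing
    more variables only enlarges the cube, so it can never succeed later;
    the result is therefore prime."""
    cube = dict(enumerate(minterm))
    for v in range(n):
        trial = {k: b for k, b in cube.items() if k != v}
        if all(m in cover_ok for m in cube_minterms(trial, n)):
            cube = trial
    return cube

# f(a, b, c) with on-set minterms {0, 1, 3, 7} and no don't-cares.
on_set = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}
```

Expanding minterm 0 here yields the cube a'b' (variables a and b fixed at 0, c free), and expanding minterm 7 yields bc, without ever enumerating the full prime implicant set.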
7

A study of a short term correlation algorithm

Vijayendra, Siravara January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
8

A Multivariate Adaptive Trimmed Likelihood Algorithm

Schubert, Daniel January 2005 (has links)
The research reported in this thesis describes a new algorithm which can be used to robustify statistical estimates adaptively. The algorithm does not require any pre-specified cut-off value between inlying and outlying regions, and there is no presumption of any cluster configuration. This new algorithm adapts to any particular sample: it may advise the trimming of a certain proportion of data considered extraneous, or it may divulge the structure of a multi-modal data set. Its adaptive quality also allows for the confirmation that uni-modal, multivariate normal data sets are outlier free. The algorithm is also shown to behave independently of the type of outlier: whether applied to a data set with a solitary observation located in some extreme region or to a data set composed of clusters of outlying data, it performs with a high probability of success.
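Trimmed-likelihood estimation of the kind this thesis builds on can be illustrated in a much-simplified form. The sketch below is univariate, with a fixed trimming count h, so it deliberately omits the thesis's central contribution (choosing the trimming proportion adaptively, with no pre-specified cut-off); it only shows the concentration-step idea of refitting on the best-fitting subset, and all names and data here are assumptions of the example.

```python
import numpy as np

def trimmed_normal_fit(x, h, iters=25):
    """Univariate trimmed-likelihood fit by concentration steps: fit a
    normal location to the current h-subset, re-select the h points with
    the highest likelihood under that fit (for a normal model, those
    with the smallest squared residuals), and repeat."""
    # Robust starting subset: the h points closest to the median.
    idx = np.argsort(np.abs(x - np.median(x)))[:h]
    for _ in range(iters):
        mu = x[idx].mean()
        idx = np.argsort((x - mu) ** 2)[:h]
    return x[idx].mean(), np.sort(idx)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 20),    # inliers
                       rng.normal(10.0, 0.1, 5)])   # planted outlying cluster
mu, kept = trimmed_normal_fit(data, h=20)
```

With the outlying cluster planted far from the inliers, the retained subset settles on the 20 inliers and the location estimate stays near 0, where an untrimmed mean would be pulled toward 2.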
