31 |
System identification in the presence of nonlinear distortions using multisine signals. Solomou, Michael, January 2003 (has links)
No description available.
|
32 |
Analysis and identification of nonlinear system using parametric models of Volterra operators. Feijoo, Juan Alejandro Vazquez, January 2002 (has links)
No description available.
|
33 |
Some theoretical and computational aspects of approximation theory. Carpenter, A. J., January 1988 (has links)
No description available.
|
34 |
Aspects of robustness and approximation in hierarchical models. Sharples, L. D., January 1988 (has links)
No description available.
|
35 |
Development of non-local density functional methods. Jochym, Dominik Bogdan, January 2008 (has links)
Density functional theory (DFT) is a popular approach to solving the many-electron Schrödinger equation, in order to investigate the properties of matter from first principles. While DFT can give the exact ground state electronic density of a system, in practice, an approximation is required for the many-body effects contained in the exchange-correlation functional. The accuracy of calculations performed using DFT is strongly related to the choice of approximation. In this thesis we will investigate and build upon a fully non-local approach to modeling exchange-correlation in the form of the weighted density approximation (WDA). Central to the WDA is the model function chosen for the coupling-constant averaged pair-correlation function (PCF). We show that a model PCF can be selected from a set to give excellent bulk properties for a particular system. However, this model is not necessarily transferable to other systems and there is no method of selecting an appropriate model from this set a priori. We suggest that the model PCF can be improved systematically by satisfying known physical constraints. One such constraint is the Kimball cusp condition, which we include in our model and implement. We demonstrate that surfaces are systems that require a non-local treatment of exchange-correlation by applying the WDA to metal surfaces and investigate the dissociative adsorption of H2 on the Cu(100) surface. A new framework for a model PCF with spin resolution is developed, providing a route for more physical constraints to be satisfied within a weighted spin density approximation (WSDA). A simple model is suggested and implemented and comparisons are made to the coupling-constant averaged PCF in the homogeneous electron gas. We then apply a selection of our new models to a number of materials and show that our model for the WSDA gives improved band gaps over the local density approximation. 
Application of the WSDA to spin polarised materials reveals shortcomings in our simple model. We then suggest further refinements to our implementation of the WSDA. It is expected that the inclusion of additional physical constraints will systematically improve results given in a weighted-density based approximation to exchange-correlation.
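For orientation, the fully non-local weighted density approximation referred to in this abstract is usually written in the following standard textbook form (this statement is not taken from the thesis itself): the exchange-correlation energy is built from the density and a model pair-correlation function ḡ evaluated at a weighted density ñ(r), which is fixed pointwise by a sum rule.

```latex
E_{xc}^{\mathrm{WDA}}
  = \frac{1}{2}\int \mathrm{d}\mathbf{r}\, n(\mathbf{r})
    \int \mathrm{d}\mathbf{r}'\,
    \frac{n(\mathbf{r}')\left[\bar g\bigl(|\mathbf{r}-\mathbf{r}'|;\,
      \tilde n(\mathbf{r})\bigr)-1\right]}{|\mathbf{r}-\mathbf{r}'|},
\qquad
\int \mathrm{d}\mathbf{r}'\, n(\mathbf{r}')
    \left[\bar g\bigl(|\mathbf{r}-\mathbf{r}'|;\,
      \tilde n(\mathbf{r})\bigr)-1\right] = -1 .
```

The sum rule on the right enforces that the exchange-correlation hole around each electron contains exactly one electron, which is the physical constraint that makes the weighted density ñ(r) well defined.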
|
36 |
Constant and power-of-2 segmentation algorithms for a high speed numerical function generator. Valenzuela, Zaldy M., 06 1900 (has links)
The realization of high-speed numeric computation is a sought-after commodity for real-world applications, including high-speed scientific computation, digital signal processing, and embedded computers. An example of this is the generation of elementary functions, such as sin(x), e^x and log(x). Sasao, Butler and Reidel [Ref. 1] developed a high-speed numeric function generator using a look-up table (LUT) cascade. Their method used a piecewise linear segmentation algorithm to generate the functions [Ref. 1]. In this thesis, two alternative segmentation algorithms are proposed and compared to the results of Sasao, Butler and Reidel [Ref. 1]. The first algorithm is the constant approximation, which uses lines of slope zero to approximate a curve. The second algorithm is the power-of-2 approximation, which uses lines of slope 2^i to approximate a curve. The constant approximation eliminates the need for both a multiplier and an adder, while the power-of-2 approximation eliminates the need for a multiplier, since multiplication by 2^i reduces to a shift; both changes improve computation speed. Tradeoffs between the three methods are examined. The implementation of the piecewise linear algorithm requires the most hardware and is slower than the other two; its advantage is that it yields the fewest segments to generate a function. The constant approximation requires the most segments to realize a function, but is the fastest implementation. The power-of-2 approximation is an intermediate choice that balances speed and hardware requirements.
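The two proposed segmentation schemes can be illustrated with a short hypothetical sketch (this is not code from the thesis): a greedy segmenter extends each segment until a Chebyshev error budget eps would be exceeded, once with the slope fixed to zero (constant approximation) and once with slopes restricted to powers of two (slope 0 is also allowed in that set here, so a flat segment stays available).

```python
import math

def chebyshev_fit(xs, ys, slopes):
    """Best min-max fit y ~ s*x + c with the slope restricted to `slopes`.
    For a fixed slope the optimal offset is the midrange of the residual,
    and the error is half the residual's range."""
    best = None
    for s in slopes:
        r = [y - s * x for x, y in zip(xs, ys)]
        err = (max(r) - min(r)) / 2.0
        if best is None or err < best[0]:
            best = (err, s, (max(r) + min(r)) / 2.0)
    return best  # (error, slope, offset)

def greedy_segments(f, a, b, eps, slopes, step=0.005):
    """Greedily extend each segment as far as the error budget eps allows."""
    n = int(round((b - a) / step))
    xs = [a + i * step for i in range(n + 1)]
    ys = [f(x) for x in xs]
    segs, i = [], 0
    while i < n:
        j, fit = i + 1, chebyshev_fit(xs[i:i+2], ys[i:i+2], slopes)
        while j < n:
            nxt = chebyshev_fit(xs[i:j+2], ys[i:j+2], slopes)
            if nxt[0] > eps:
                break
            j, fit = j + 1, nxt
        segs.append((xs[i], xs[j], fit[1], fit[2]))  # (lo, hi, slope, offset)
        i = j
    return segs

# Constant approximation: the slope is fixed to zero (no multiplier, no adder).
const_segs = greedy_segments(math.sin, 0.0, 1.5, 0.05, [0.0])
# Power-of-2 approximation: slopes of the form 2^i, so a shift replaces the
# multiplier; slope 0 is also included so flat segments remain available.
pow2_segs = greedy_segments(math.sin, 0.0, 1.5, 0.05,
                            [0.0] + [2.0 ** i for i in range(-6, 3)])
```

Since the power-of-2 slope set contains the constant one, it never needs more segments than the constant scheme on the same error budget, which mirrors the hardware/segment-count tradeoff described in the abstract.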
|
37 |
A Primal-Dual Approximation Algorithm for the Concurrent Flow Problem. Nahabedian, Aaron Joseph, 29 April 2010 (has links)
The multicommodity flow problem involves shipping multiple commodities simultaneously through a network so that the total flow over each edge does not exceed the capacity of that edge. The concurrent flow problem additionally associates a demand with each commodity, and asks for the maximum fraction z such that a z-fraction of each commodity's demand can be feasibly shipped through the network. This problem has applications in message routing, transportation, and scheduling. It can be formulated as a linear programming problem, and the best known solutions take advantage of decomposition techniques for linear programming. Often, quickly finding an approximate solution is more important than finding an optimal one; a solution is epsilon-optimal if it lies within a factor of (1+epsilon) of the optimal solution. We present a combinatorial approximation algorithm for the concurrent flow problem. It consists of finding an initial flow and gradually rerouting it from more to less congested paths, until an epsilon-optimal flow is achieved. This algorithm theoretically runs much faster than algorithms based on linear programming.
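The rerouting idea can be sketched on a tiny hypothetical instance (the edge names, capacities, demands and candidate paths below are all invented for illustration; the thesis's algorithm is a primal-dual method, and this sketch only shows the shift-flow-away-from-congestion intuition). Routing the full demands with maximum edge congestion lam means that scaling every flow by 1/lam is feasible, so z >= 1/lam.

```python
capacity = {"e1": 4.0, "e2": 3.0, "e3": 2.0}
commodities = [            # (demand, candidate paths given as edge lists)
    (5.0, [["e1"], ["e2", "e3"]]),
    (2.0, [["e2"], ["e3"]]),
]
# flows[k][p] = flow that commodity k sends on its p-th candidate path;
# start with each commodity's whole demand on its first path.
flows = [[d] + [0.0] * (len(paths) - 1) for d, paths in commodities]

def edge_loads():
    loads = {e: 0.0 for e in capacity}
    for (_, paths), f in zip(commodities, flows):
        for p, path in enumerate(paths):
            for e in path:
                loads[e] += f[p]
    return loads

def congestion(path, loads):
    return max(loads[e] / capacity[e] for e in path)

for _ in range(500):
    loads = edge_loads()
    for (_, paths), f in zip(commodities, flows):
        cong = [congestion(p, loads) for p in paths]
        hi, lo = cong.index(max(cong)), cong.index(min(cong))
        if hi != lo:               # shift 5% of the worst path's flow
            delta = 0.05 * f[hi]
            f[hi] -= delta
            f[lo] += delta

lam = max(load / capacity[e] for e, load in edge_loads().items())
z = 1.0 / lam   # fraction of every demand that is certainly routable
```

Each round moves a little flow from a commodity's most congested candidate path to its least congested one, so the maximum congestion falls toward its optimum and z rises toward the concurrent-flow value.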
|
38 |
Chebyshev approximation by piecewise continuous functions. Chand, Donald Rajinder, January 1965 (has links)
Thesis (Ph.D.)--Boston University / This paper discusses the following problem of approximation theory: a continuous function f(x) is to be approximated over an interval alpha ≤ x ≤ beta by N not necessarily connected polynomials of a given degree n, in such a way that the maximum error magnitude is a minimum. Each polynomial is associated with one subinterval [u_j, u_{j+1}]. If the end points u_j were specified in advance, the problem would reduce to N independent problems of the same type, namely, the fitting of a single polynomial in the Chebyshev sense. Here, however, the end points are taken as unknowns and the principal problem is to determine them.
The paper presents a proof of the existence of the best approximation, and examples showing that the solution, in general, is not unique. However, in the special case of approximation of convex functions by line segments, the solution is shown to be unique. Further, in this case a simple characterization of the solution is obtained, and it is shown that the problem may be reduced analytically to a stage where, in order to determine the u_j computationally, it is only necessary to solve a system of equations rather than minimize a function. Results obtained by a dynamic programming method using a digital computer (IBM 7090) are used for illustration.
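The dynamic-programming approach mentioned above can be sketched on a sampled grid (a simplified hypothetical illustration, not the thesis's program: the breakpoints u_j are restricted to grid points, and each segment's min-max line is approximated by fixing its slope to the secant, which is the right slope for convex data).

```python
import math

def seg_error(xs, ys, i, j):
    """Chebyshev error of the best single line on samples i..j: fix the
    slope to the secant and take half the residual's range (the midrange
    offset then equalizes the extreme errors)."""
    m = (ys[j] - ys[i]) / (xs[j] - xs[i])
    r = [y - m * x for x, y in zip(xs[i:j+1], ys[i:j+1])]
    return (max(r) - min(r)) / 2.0

def best_breakpoints(f, a, b, n_seg, n_grid=100):
    """Minimize the max segment error over all placements of n_seg pieces."""
    xs = [a + (b - a) * t / n_grid for t in range(n_grid + 1)]
    ys = [f(x) for x in xs]
    INF = float("inf")
    # dp[k][j]: least achievable max error covering samples 0..j with k pieces
    dp = [[INF] * (n_grid + 1) for _ in range(n_seg + 1)]
    back = [[0] * (n_grid + 1) for _ in range(n_seg + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_seg + 1):
        for j in range(k, n_grid + 1):
            for i in range(k - 1, j):
                e = max(dp[k-1][i], seg_error(xs, ys, i, j))
                if e < dp[k][j]:
                    dp[k][j], back[k][j] = e, i
    cuts, j = [], n_grid          # recover the interior breakpoints u_j
    for k in range(n_seg, 0, -1):
        j = back[k][j]
        cuts.append(xs[j])
    return dp[n_seg][n_grid], sorted(cuts)[1:]
```

For a convex function such as e^x on [0, 1], three optimally placed segments reduce the uniform error well below that of a single Chebyshev line, and the recovered interior breakpoints are the u_j the abstract describes.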
|
39 |
Heuristic methods for solving knapsack type problems. Al-Douri, Thekra, 02 February 2018 (has links)
The aim of this thesis is to propose approximate algorithms for tackling the max-min multi-scenario knapsack problem (MSKP). Three complementary methods are proposed, each based on perturbing solutions and then rebuilding them. In the first part, the MSKP is solved with a hybrid reactive search algorithm built on two nested phases: (i) a restoring/exploring phase and (ii) a perturbation phase. The first phase starts from a feasible solution and tries to improve it with an intensification search based on a series of exchanges between items inside and outside the current solution. The second phase acts as a diversification search: it removes items to form a degraded partial solution, re-optimizes by selecting new items to include, and thereby investigates a series of subspaces in order to escape local stagnation. The method is evaluated on a set of benchmark instances taken from the literature, and its results are compared to those reached by recent methods; the results show that the method is competitive and able to provide better solutions.
The second part presents a population-based method that combines three complementary stages: (i) a building stage, (ii) a combination stage and (iii) a rebuild stage. The building stage provides a starting population of feasible solutions using a randomized greedy procedure. The combination stage produces each new solution by combining pairs of existing solutions through an intersection operator. The rebuild stage intensifies the search on the resulting solutions in order to improve them; the last two stages are repeated until the population stabilizes. On the benchmark instances, the method provides better solutions than those already published.
In the third part, a greedy randomized adaptive search procedure (GRASP) is combined with path-relinking. The method iterates a building phase and an improvement phase: the first yields a starting feasible solution via GRASP, and the second enhances each current solution with a path-relinking strategy that starts from a pair of solutions and rebuilds the path between them, hoping to encounter better solutions along the way. Encouraging results have been obtained on the benchmark instances.
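The GRASP construction and path-relinking improvement described above can be sketched on a small invented instance (the weights, per-scenario profits, capacity and parameters below are all hypothetical; the thesis's actual operators are richer than this).

```python
import random

random.seed(7)

weights = [4, 3, 5, 2, 6, 4]
profits = [  # profits[i][s]: profit of item i under scenario s
    [6, 2], [3, 5], [7, 4], [2, 3], [8, 3], [4, 6],
]
CAP = 12

def value(sol):
    """Max-min objective: worst-scenario total profit (-1 if infeasible)."""
    if sum(w for w, x in zip(weights, sol) if x) > CAP:
        return -1
    return min(sum(p[s] for p, x in zip(profits, sol) if x)
               for s in range(len(profits[0])))

def grasp_construct(alpha=0.5):
    """Greedy randomized construction: repeatedly pick an item at random
    from the best alpha-fraction ranked by worst-case profit density."""
    sol, free, load = [0] * len(weights), list(range(len(weights))), 0
    while True:
        cand = [i for i in free if load + weights[i] <= CAP]
        if not cand:
            return sol
        cand.sort(key=lambda i: -min(profits[i]) / weights[i])
        i = random.choice(cand[:max(1, int(alpha * len(cand)))])
        sol[i], load = 1, load + weights[i]
        free.remove(i)

def path_relink(start, guide):
    """Walk from start toward guide one item-flip at a time, greedily
    taking the flip with the best objective; keep the best solution seen."""
    cur, best = start[:], start[:]
    diff = [i for i in range(len(start)) if start[i] != guide[i]]
    while diff:
        i = max(diff, key=lambda k: value(cur[:k] + [guide[k]] + cur[k+1:]))
        cur[i] = guide[i]
        diff.remove(i)
        if value(cur) > value(best):
            best = cur[:]
    return best

pool = sorted((grasp_construct() for _ in range(8)), key=value)
best = max((path_relink(a, b) for a in pool for b in pool if a != b),
           default=pool[-1], key=value)
```

Path-relinking only ever moves through solutions on a path between two elite solutions, so it complements the randomized construction: GRASP supplies diverse starting points and the relinking walk mines the space between them.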
|
40 |
Complexity on some bin packing problems. / CUHK electronic theses & dissertations collection. January 2000 (has links)
by Lau Siu Chung. / "April 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 97-102). / Abstracts in English and Chinese.
|