About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Generalization Performance of Margin Multi-category Classifiers / Performances en généralisation des classifieurs multi-classes à marge

Musayeva, Khadija 23 September 2019 (has links)
This thesis deals with the theory of margin multi-category classification, within the statistical learning theory founded by Vapnik and Chervonenkis. We are interested in deriving generalization bounds with explicit dependencies on the number C of categories, the sample size m and the margin parameter gamma, when the loss function considered is a Lipschitz continuous margin loss function. Generalization bounds rely on the empirical performance of the classifier as well as on its "capacity". In this work, the following scale-sensitive capacity measures are considered: the Rademacher complexity, the covering numbers and the fat-shattering dimension. Our main contributions are obtained under the assumption that the classes of component functions implemented by the classifier have polynomially growing fat-shattering dimensions and that the component functions are independent. In the context of Mendelson's pathway, which relates the Rademacher complexity to the covering numbers and the latter to the fat-shattering dimension, we study the impact that decomposing at the level of one of these capacity measures has on the dependencies on C, m and gamma. In particular, we demonstrate that the dependency on C can be substantially improved over the state of the art if the decomposition is postponed to the level of the metric entropy or the fat-shattering dimension. On the other hand, this negatively affects the rate of convergence (the dependency on m), an indication that optimizing the dependencies on the three basic parameters amounts to looking for a trade-off.
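As a point of reference for the abstract above (an illustrative template, not a result stated in the thesis), margin-based multi-category bounds typically hold with probability at least 1 - delta in a form like the following, where the capacity term carries the dependencies on C, m and gamma studied here:

```latex
R(f) \;\le\; R_{\gamma,m}(f) \;+\; \mathrm{capacity}(C, m, \gamma) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}}
```

with R(f) the risk, R_{gamma,m}(f) the empirical margin risk at scale gamma, and the capacity term instantiated, for example, via a Rademacher complexity, a metric entropy, or a fat-shattering bound.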
202

Theoretical and computational analysis of the two-stage capacitated plant location problem : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Decision Science at Massey University, Palmerston North, New Zealand

Wildbore, Bronwyn Louise Unknown Date (has links)
Mathematical models for plant location problems form an important class of integer and mixed-integer linear programs. The Two-Stage Capacitated Plant Location Problem (TSCPLP), the subject of this thesis, consists of a three-level structure: the first or upper-most level contains the production plants, the second or central level contains the distribution depots, and the third level is the customers. The decisions to be made are: the subset of plants and depots to open; the assignment of customers to open depots, and therefore to open plants; and the flow of product from the plants to the depots, to satisfy the customers' service or demand requirements at minimum cost. The formulation proposed for the TSCPLP differs from previous models in the literature in that customers can be served from multiple open depots (and plants) and the capacity of both the set of plants and the set of depots is restricted. Surrogate constraints are added to strengthen the bounds from relaxations of the problem. The need for a better understanding of the strength of the bounds generated by this procedure for the TSCPLP is evident in the literature: Lagrangian relaxations are often chosen based more on ease of solution than on the knowledge that a strong bound will result. Lagrangian relaxation has been applied in heuristics and also inserted into branch-and-bound algorithms, providing stronger bounds than traditional linear programming relaxations. The current investigation provides a theoretical and computational analysis of Lagrangian relaxation bounds for the TSCPLP directly. Results are computed through a Lagrangian heuristic and CPLEX. The test problems for the computational analysis cover a range of problem sizes and strengths of capacity constraints. This is achieved by scaling the ratio of total depot capacity to customer demand and the ratio of total plant capacity to total depot capacity on subsets of problem instances. The analysis shows that there are several constraints in the formulation that, if dualized in a Lagrangian relaxation, provide strong bounds on the optimal solution to the TSCPLP. This research has applications in solution techniques for the TSCPLP and can be extended to some transformations of the TSCPLP, including the single-source TSCPLP and the multi-commodity TSCPLP, which accommodates multiple products or services.
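To make the bounding mechanism concrete, here is a minimal sketch (illustrative only, not the thesis's TSCPLP formulation) of computing a Lagrangian dual bound by subgradient ascent for a generic 0-1 program min c.x subject to Ax >= b; by weak duality, L(lam) at any lam >= 0 is a valid lower bound on the integer optimum.

```python
import numpy as np

def lagrangian_bound(c, A, b, steps=200):
    """Lower bound for min c.x s.t. A x >= b, x in {0,1}^n,
    obtained by dualizing A x >= b with multipliers lam >= 0."""
    lam = np.zeros(len(b))
    best = -np.inf
    for t in range(1, steps + 1):
        # Inner minimization is separable: keep x_j = 1 iff its
        # reduced cost c_j - lam . A[:, j] is negative.
        red = c - lam @ A
        x = (red < 0).astype(float)
        L = red @ x + lam @ b          # L(lam) = min_x c.x + lam.(b - A x)
        best = max(best, L)            # any L(lam) is a valid lower bound
        g = b - A @ x                  # subgradient of L at lam
        lam = np.maximum(0.0, lam + g / np.sqrt(t))  # projected step
    return best

# Toy instance: 2 covering constraints, 4 binary variables.
c = np.array([3.0, 2.0, 4.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
b = np.array([1.0, 1.0])
print(lagrangian_bound(c, A, b))
```

Which constraints are dualized determines how tight the resulting bound is — the question the computational analysis above studies for the TSCPLP.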
203

Data-Dependent Analysis of Learning Algorithms

Philips, Petra Camilla, petra.philips@gmail.com January 2005 (has links)
This thesis studies the generalization ability of machine learning algorithms in a statistical setting. It focuses on the data-dependent analysis of the generalization performance of learning algorithms in order to make full use of the potential of the actual training sample from which these algorithms learn.

First, we propose an extension of the standard framework for the derivation of generalization bounds for algorithms taking their hypotheses from random classes of functions. This approach is motivated by the fact that the function produced by a learning algorithm based on a random sample of data depends on this sample and is therefore a random function. Such an approach avoids the detour of the worst-case uniform bounds taken in the standard approach. We show that the mechanism which allows one to obtain generalization bounds for random classes in our framework is based on a “small complexity” of certain random coordinate projections. We demonstrate how this notion of complexity relates to learnability and how one can explore geometric properties of these projections in order to derive estimates of rates of convergence and good confidence interval estimates for the expected risk. We then demonstrate the generality of our new approach by presenting a range of examples, among them the algorithm-dependent compression schemes and the data-dependent luckiness frameworks, which fall into our random subclass framework.

Second, we study in more detail generalization bounds for a specific algorithm which is of central importance in learning theory, namely the Empirical Risk Minimization algorithm (ERM). Recent results show that one can significantly improve the high-probability estimates for the convergence rates for empirical minimizers by a direct analysis of the ERM algorithm. These results are based on a new localized notion of complexity of subsets of hypothesis functions with identical expected errors and are therefore dependent on the underlying unknown distribution. We investigate the extent to which one can estimate these high-probability convergence rates in a data-dependent manner. We provide an algorithm which computes a data-dependent upper bound for the expected error of empirical minimizers in terms of the “complexity” of data-dependent local subsets. These subsets are sets of functions with empirical errors in a given range and can be determined based solely on empirical data. We then show that recent direct estimates, which are essentially sharp estimates of the high-probability convergence rate for the ERM algorithm, cannot be recovered universally from empirical data.
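As a small illustration of the data-dependent viewpoint (a generic computation, not the thesis's algorithm), the empirical Rademacher complexity of a finite hypothesis class can be approximated by Monte Carlo directly from the sample:

```python
import numpy as np

def empirical_rademacher(preds, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    of a finite class, given preds[h, i] = h(x_i) on the sample."""
    rng = np.random.default_rng(seed)
    n_hyp, m = preds.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)   # Rademacher signs
        total += np.max(preds @ sigma) / m        # sup over the class
    return total / n_draws

# Three hypotheses evaluated on a sample of size 10 (toy data).
preds = np.sign(np.random.default_rng(1).normal(size=(3, 10)))
print(empirical_rademacher(preds))
```

This quantity is computable from the training sample alone, which is exactly what makes data-dependent bounds of the kind studied above possible.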
204

Trade, Unemployment and Labour Market Institutions

Kim, Jaewon January 2011 (has links)
The thesis consists of three papers, summarized as follows.

"The Determinants of Labour Market Institutions: A Panel Data Study" — This paper analyses the argument that labour market institutions can be thought of as devices for social insurance. It investigates the hypotheses that a country's exposure to external risk and ethnic fractionalisation are correlated with labour market institutions. Extreme bounds analysis on forty years of panel data indicates that countries that are more open to international trade have stricter employment protection, stronger unions, and a more coordinated wage bargaining process. Moreover, there is evidence that union density is negatively associated with the degree of ethnic fractionalisation.

"Why do Some Studies Show that Generous Unemployment Benefits Increase Unemployment Rates? A Meta-Analysis of Cross-Country Studies" — This paper investigates the hypothesis that generous unemployment benefits give rise to high levels of unemployment by systematically reviewing 34 cross-country studies. In contrast to conventional literature surveys, I perform a meta-analysis which applies regression techniques to a set of results taken from the existing literature. The main finding is that the choice of primary data and estimation method matters for the final outcome. The control variables in the primary studies also affect the results.

"The Effects of Trade on Unemployment: Evidence from 20 OECD countries" — This study empirically investigates whether international trade has an impact on aggregate unemployment in the presence of labour market institutions. Using data for twenty OECD countries for the years 1961-2008, it finds that an increase in trade leads to higher aggregate unemployment when it interacts with rigid labour market institutions, whereas it may reduce aggregate unemployment if the labour market is characterised by flexibility. In a country with an average degree of labour market rigidity, an increase in trade has no significant effect on unemployment rates.
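For context on the method used in the first paper, a bare-bones sketch of extreme bounds analysis (with hypothetical variable names and plain least squares; real EBA also inspects significance) could look like this:

```python
import numpy as np
from itertools import combinations

def extreme_bounds(y, focus, controls, max_k=2):
    """Range of the focus coefficient across OLS regressions that
    include every subset of up to max_k control variables."""
    n = len(y)
    lo, hi = np.inf, -np.inf
    names = list(range(controls.shape[1]))
    for k in range(max_k + 1):
        for subset in combinations(names, k):
            X = np.column_stack([np.ones(n), focus, controls[:, list(subset)]])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            lo, hi = min(lo, beta[1]), max(hi, beta[1])
    return lo, hi  # the "extreme bounds" on the focus coefficient

rng = np.random.default_rng(0)
openness = rng.normal(size=200)               # hypothetical focus regressor
controls = rng.normal(size=(200, 4))          # hypothetical controls
epl = 0.5 * openness + rng.normal(size=200)   # hypothetical outcome
print(extreme_bounds(epl, openness, controls))
```

A focus variable whose coefficient keeps the same sign across all control sets is deemed robust, which is the sense in which the paper's openness results are reported.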
205

Essays in option pricing and interest rate models

Slinko, Irina January 2006 (has links)
Diss. (summary) Stockholm : Handelshögskolan, 2006. [6], xiii, [1] pp.: summary; pp. 1-259; [5] pp.: 4 essays. Spikblad (defence announcement sheet) missing.
206

Optimal investment in incomplete financial markets

Schachermayer, Walter January 2002 (has links) (PDF)
We give a review of classical and recent results on maximization of expected utility for an investor who has the possibility of trading in a financial market. Emphasis will be given to the duality theory related to this convex optimization problem. For expository reasons we first consider the classical case where the underlying probability space is finite. This setting has the advantage that the technical difficulties of the proofs are reduced to a minimum, which allows for a clearer insight into the basic ideas, in particular the crucial role played by the Legendre transform. In this setting we state and prove an existence and uniqueness theorem for the optimal investment strategy and its relation to the dual problem; the latter consists in finding an equivalent martingale measure optimal with respect to the conjugate of the utility function. We also discuss economic interpretations of these theorems. We then pass to the general case of an arbitrage-free financial market modeled by an R^d-valued semi-martingale. In this case some regularity conditions have to be imposed in order to obtain an existence result for the primal problem of finding the optimal investment, as well as for a proper duality theory. It turns out that one may give a necessary and sufficient condition, namely a mild condition on the asymptotic behavior of the utility function, its so-called reasonable asymptotic elasticity. This property allows for an economic interpretation motivating the term "reasonable". The remarkable fact is that this regularity condition only pertains to the behavior of the utility function, while we do not have to impose any regularity conditions on the stochastic process modeling the financial market (to be precise: of course, we have to require the arbitrage-freeness of this process in a proper sense; also we have to assume in one of the cases considered below that this process is locally bounded; but otherwise it may be an arbitrary R^d-valued semi-martingale). (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
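In the notation standard in this literature (restated here for orientation, following the abstract's description), the primal utility maximization problem, its dual over equivalent martingale measures, and the reasonable asymptotic elasticity condition read:

```latex
u(x) = \sup_{X \in \mathcal{X}(x)} \mathbb{E}\,[U(X_T)], \qquad
v(y) = \inf_{Q \in \mathcal{M}^e} \mathbb{E}\Big[V\Big(y\,\tfrac{dQ}{dP}\Big)\Big], \qquad
V(y) = \sup_{x>0}\,\{U(x) - xy\},
```

```latex
\mathrm{AE}(U) \;=\; \limsup_{x \to \infty} \frac{x\,U'(x)}{U(x)} \;<\; 1 .
```

Here V is the Legendre conjugate of U, and the elasticity condition is the "mild condition on the asymptotic behavior of the utility function" referred to above.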
207

A Methodology To Recover Unstable Aircraft From Post Stall Regimes: Design And Analysis

Saraf, Amitabh 03 1900 (has links)
This thesis deals with the high angle of attack behaviour of a generic delta wing model aircraft. A high angle of attack wind tunnel database has been generated for this aircraft, and based upon bifurcation analysis of the data and the results of extensive simulations, it is shown in the thesis that the post stall behaviour of this aircraft is both unstable and unpredictable. The unpredictability arises from the fact that the aircraft response is oscillatory and divergent; the aircraft state trajectories do not settle down to any stable limit set and very often exceed valid aerodynamic database limits. This unpredictability raises a major difficulty in designing a procedure to recover the aircraft to the normal flight regime in case the aircraft stalls and departs accidentally. A new methodology is presented in this thesis to recover such unstable aircraft. In this methodology, a nonlinear controller is first designed at high angles of attack. This controller is engaged by the pilot after the departure of the aircraft and drives the aircraft to a well-defined spin condition; the controller thus makes the post stall behaviour predictable. A set of automatic recovery inputs is then designed to reduce aircraft rotations and to lower the angle of attack. The present aircraft model is unstable at low angle of attack flight conditions as well, and therefore another controller is designed to stabilize the aircraft in low angle of attack level flight. The high angle of attack controller is disconnected and the low angle of attack controller is connected automatically during the recovery process. The entire methodology is tested using extensive nonlinear six degree-of-freedom simulations, and the efficacy of the technique is established. The nonlinear controller that stabilizes the aircraft to a spin condition is designed using feedback linearization. The stability of a closed loop system obtained using feedback linearization is determined by the stability of the zero dynamics of the open loop plant. It has been shown in the literature that the eigenvalues of the linearized zero dynamics are the same as the transmission zeros of the linearized plant at the equilibrium point. It is also well known that the location of the transmission zeros of a linear system can be changed by the choice of outputs. In this thesis it is shown that if it is possible to reassign the outputs, the feedback linearization based design for a linear system becomes very similar to a controller design for eigenvalue assignment. The thesis presents a new two-step procedure to obtain a locally stable and optimally robust closed loop system using feedback linearization. In the first step of this procedure, optimal locations of the transmission zeros are found, and in the second step, optimal outputs are constructed to place the system transmission zeros at these locations. The same outputs can then be used to construct nonlinear feedback for the nonlinear system, and the resultant closed loop system is guaranteed to be locally robustly stable. The high angle of attack controller is designed using this procedure and its performance is presented in the thesis. The stabilized spin equilibrium point of the closed loop system is also shown to have a large domain of attraction. Having designed a locally robust stabilizing controller, the thesis addresses the problem of evaluating the robustness of the stability of the equilibrium point in a nonlinear framework.
The thesis presents a general method to construct bounds on the additive perturbations of the system vector field over a large region in the domain of attraction of a stable equilibrium point using Lyapunov functions. If the system perturbations lie within these bounds, the system is guaranteed to be stable. The thesis first proposes a method to numerically construct a Lyapunov function over a large region in the domain of attraction. In this method, a sequence of Lyapunov functions is constructed such that each function in the sequence gives a larger estimate of the domain of attraction than the previous one. The seminal idea for this method is taken from the existing literature and is considerably generalized. Using this method, it is possible to numerically obtain a Lyapunov function value at each point in the domain of attraction, but the Lyapunov function does not have an analytical form. Hence, it is proposed to represent this function using neural networks. The thesis then discusses a new method to construct perturbation bounds. It is shown that the perturbation bounds obtained over a large region in the domain of attraction using a single Lyapunov function are too conservative. Using the concept of a sequence of Lyapunov functions, the thesis proposes three methods to obtain the least conservative bounds for an initial local Lyapunov function. These general ideas are then applied to the aircraft example, and bounds on the perturbation of the aerodynamic database are presented.
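The Lyapunov-based estimation of a domain of attraction described above can be illustrated numerically; the sketch below (a generic toy system, not the thesis's aircraft model or its sequence-of-functions method) estimates the largest level set of a quadratic V on which V_dot < 0 by sampling.

```python
import numpy as np

# Toy system: x1' = -x1 + x2, x2' = -x2 + x1**3  (illustrative only)
def f(x):
    return np.array([-x[0] + x[1], -x[1] + x[0] ** 3])

P = np.array([[1.0, 0.0], [0.0, 1.0]])   # quadratic Lyapunov candidate V = x'Px

def vdot(x):
    return 2.0 * x @ P @ f(x)            # V_dot = 2 x'P f(x) along trajectories

# Sample the state space and find the largest c such that
# V(x) <= c implies V_dot(x) < 0 (away from the origin).
rng = np.random.default_rng(0)
c_max = np.inf
for _ in range(100_000):
    x = rng.uniform(-3, 3, size=2)
    if x @ x < 1e-6:
        continue
    if vdot(x) >= 0:                     # V_dot fails here, so the certified
        c_max = min(c_max, x @ P @ x)    # level set must stay below V(x)
print("estimated invariant level set: V(x) <", c_max)
```

Any sublevel set found this way is (up to sampling error) an estimate of the domain of attraction; the thesis's sequence of Lyapunov functions enlarges such estimates step by step.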
208

Bounds On Augmented Automata And Quantum Adiabatic Optimization

Rao, M V Panduranga 02 1900 (has links)
Quantum computing has generated a lot of interest in the past two decades. Research into powerful models of quantum computation has yielded important and elegant results like an efficient algorithm for factoring and a quadratic speed-up for unordered search. At the same time, given the current difficulty in the physical implementation of these general purpose models, considerable effort has also been made in estimating the power of weaker models of quantum computation: models that have a small quantum component.

The first part of this thesis is an investigation into the power of interference in quantum computing. While interference in probability amplitudes is a central feature even in powerful models, it is the only extra resource available to quantum finite automata. Of particular interest is interference in automata that have both classical and quantum states (2QCFA) as proposed by Ambainis and Watrous, since it inquires into the power of a classical deterministic finite automaton when augmented with a quantum component of constant size. Our contributions in this part are as follows:

• To abstract out the phenomenon of interference in quantum computing, we propose a model called the 2-way Optical Interference Automata (2OIA). The model consists of a 2DFA augmented with a simple optical arrangement. We show different ways of harnessing the power of interference in the form of algorithms on this model to recognize some non-trivial languages. We then go on to show a language recognizable by a Turing machine using O(n^2) space but by no 2OIA.

• A natural classical model for comparison with 2QCFA is the weighted automaton, since it has the potential to capture interference in sums of path weights. Using the Cortes-Mohri definition of language recognition, we give an efficient simulation of 2QCFA with algebraic amplitudes by weighted automata over the complex semiring.

• We introduce quantum non-determinism to the Measure-Once 1-way Quantum Finite Automata of Moore and Crutchfield and of Kondacs and Watrous, and show that even then, the model can recognize only regular languages with bounded error.

• We propose a group theoretic generalization of counter automata that allows a notion of counter reversal complexity. To obtain this generalization, we combine concepts from classical counter automata theory with results on 2QCFA. We examine specific instances of this generalization and compare their powers. We also show an instance recognizing a language that is not recognized by conventional 2-way counter automata. Finally, we show a strict hierarchy among the 1-way versions of the instances discussed.

The second part of the thesis deals with quantum adiabatic optimization algorithms. A common trick for designing faster quantum adiabatic algorithms is to apply the adiabatic condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabatic condition. We present a simple linear algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. and of Roland and Cerf when the non-zero entries of the diagonal final Hamiltonian are perturbed by a polynomial (in log N, where N is the length of the unordered list) amount.
We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
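For orientation, the local adiabatic condition referred to here (in the form popularized by Roland and Cerf; the notation g(s) for the instantaneous gap and epsilon for the error tolerance is ours) adapts the evolution speed to the gap, which is why a lower bound on g(s) translates into a running-time bound:

```latex
\left|\frac{ds}{dt}\right| \;\le\; \epsilon\,
\frac{g(s)^2}{\big|\langle E_1(s)|\,\tfrac{dH}{ds}\,|E_0(s)\rangle\big|},
\qquad\text{so that}\qquad
T \;\sim\; \frac{1}{\epsilon}\int_0^1
\frac{\big|\langle E_1|\,\tfrac{dH}{ds}\,|E_0\rangle\big|}{g(s)^2}\,ds .
```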
209

Performance bounds in terms of estimation and resolution and applications in array processing

Tran, Nguyen Duy 24 September 2012 (has links) (PDF)
This manuscript concerns performance analysis in signal processing and consists of two parts. First, we study lower bounds for characterizing and predicting estimation performance in terms of mean square error (MSE). Lower bounds on the MSE give the minimum variance that an estimator can expect to achieve, and they can be divided into two categories depending on the assumptions made on the parameter: the so-called deterministic bounds, dealing with deterministic unknown parameters, and the so-called Bayesian bounds, dealing with random unknown parameters. In particular, we derive closed-form expressions of the lower bounds for two applications in two different fields: (i) the first is target localization using multiple-input multiple-output (MIMO) radar, for which we derive the lower bounds in the contexts with and without modeling errors, respectively; (ii) the other is the pulse phase estimation of X-ray pulsars, a potential solution for autonomous deep space navigation. In this application, we show the potential universality of lower bounds for tackling problems whose parameterized probability density function (pdf) differs from the classical Gaussian pdf, since in X-ray pulse phase estimation the observations are modeled with a Poisson distribution. Second, we study the statistical resolution limit (SRL), which is the minimal distance, in terms of the parameter of interest, between two signals allowing one to correctly separate/estimate the parameters of interest. More precisely, we derive the SRL in two contexts, array processing and MIMO radar, using two approaches based on estimation theory and information theory. We also present in this thesis the usefulness of the SRL in optimizing the array system.
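The prototypical deterministic bound in this family (stated here for context, for unbiased estimators, not as the manuscript's specific result) is the Cramér-Rao bound; its Fisher-information form applies to non-Gaussian models such as the Poisson observations of the X-ray pulsar application:

```latex
\mathbb{E}\!\left[(\hat{\theta}-\theta)(\hat{\theta}-\theta)^{T}\right] \;\succeq\; F(\theta)^{-1},
\qquad
F(\theta) = \mathbb{E}\!\left[
\left(\frac{\partial \ln p(x;\theta)}{\partial \theta}\right)
\left(\frac{\partial \ln p(x;\theta)}{\partial \theta}\right)^{\!T}\right].
```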
210

Adaptive Concatenated Coding for Wireless Real-Time Communications

Uhlemann, Elisabeth January 2004 (has links)
The objective of this thesis is to improve the performance of real-time communication over a wireless channel, by means of specifically tailored channel coding. The deadline-dependent coding (DDC) communication protocol presented here lets the timeliness and the reliability of the delivered information constitute quality of service (QoS) parameters requested by the application. The values of these QoS parameters are transformed into actions taken by the link layer protocol in terms of adaptive coding strategies.

Incremental redundancy hybrid automatic repeat request (IR-HARQ) schemes using rate compatible punctured codes are appealing since no repetition of previously transmitted bits is made. Typically, IR-HARQ schemes treat the packet lengths as fixed and maximize the throughput by optimizing the puncturing pattern, i.e. the order in which the coded bits are transmitted. In contrast, we define an IR strategy as the maximum number of allowed transmissions and the number of code bits to include in each transmission. An approach is then suggested to find the optimal IR strategy that maximizes the average code rate, i.e., the optimal partitioning of n-k parity bits over at most M transmissions, assuming a given puncturing pattern. Concatenated coding used in IR-HARQ schemes provides a new array of possibilities for adaptability in terms of decoding complexity and communication time versus reliability. Hence, critical reliability and timing constraints can be readily evaluated as a function of available system resources. This in turn enables quantifiable QoS and thus negotiable QoS. Multiple concatenated single parity check codes are chosen as example codes due to their very low decoding complexity. Specific puncturing patterns for these component codes are obtained using union bounds based on uniform interleavers. The puncturing pattern that has the best performance in terms of frame error rate (FER) at a low signal-to-noise ratio (SNR) is chosen. Further, using extrinsic information transfer (EXIT) analysis, rate compatible puncturing ratios for the constituent component code are found. The puncturing ratios are chosen to minimize the SNR required for convergence.

The applications targeted in this thesis are not necessarily replacement of cables in existing wired systems. Instead the motivation lies in the new services that wireless real-time communication enables. Hence, communication within and between cooperating embedded systems is typically the focus. The resulting IR-HARQ-DDC protocol presented here is an efficient and fault tolerant link layer protocol foundation using adaptive concatenated coding intended specifically for wireless real-time communications. / Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie, 2198, Technical report. D, 29
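The notion of an IR strategy optimized here — at most M transmissions with a chosen split of the n-k parity bits — can be made concrete with a small computation. The sketch below (one common definition of average code rate, with illustrative success probabilities, not the thesis's channel model) evaluates a given partition:

```python
import numpy as np

def average_code_rate(k, parity_split, p_stop):
    """Average code rate of an IR-HARQ strategy: k info bits plus
    parity_split[0] parity bits go in the first transmission, later
    transmissions carry parity only; p_stop[i] is the probability that
    decoding first succeeds after transmission i+1 (remainder = failure
    after all M transmissions)."""
    M = len(parity_split)
    sent = np.empty(M)
    sent[0] = k + parity_split[0]            # first transmission carries the info bits
    sent[1:] = parity_split[1:]              # later ones carry parity increments only
    p_stop = np.asarray(p_stop, dtype=float)
    # Probability that transmission i is actually needed (no earlier success).
    p_reach = np.concatenate(([1.0], 1.0 - np.cumsum(p_stop[:-1])))
    expected_bits = np.sum(p_reach * sent)   # E[number of channel bits used]
    return k / expected_bits

# 64 info bits, 64 parity bits split over at most 3 transmissions.
print(average_code_rate(64, [16, 16, 32], [0.6, 0.3, 0.1]))
```

Searching over partitions [s_1, ..., s_M] with s_1 + ... + s_M = n - k then yields the optimal IR strategy in the sense described above.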
