51

Applications of accuracy certificates for problems with convex structure

Cox, Bruce 21 February 2011 (has links)
This dissertation addresses the efficient generation and potential applications of accuracy certificates in the framework of “black-box-represented” convex optimization problems: convex problems where the objective and the constraints are represented by “black boxes” which, given on input a value x of the argument, somehow (perhaps in a fashion unknown to the user) provide on output the values and the derivatives of the objective and the constraints at x. The main body of the dissertation can be split into three parts. In the first part, we provide our background: the state of the art of the theory of accuracy certificates for black-box-represented convex optimization. In the second part, we extend the toolbox of black-box-oriented convex optimization algorithms with accuracy certificates by equipping with these certificates a state-of-the-art algorithm for large-scale nonsmooth black-box-represented problems with convex structure, specifically, the Non-Euclidean Restricted Memory Level (NERML) method. In the third part, we present several novel academic applications of accuracy certificates. The dissertation is organized as follows. In Chapter 1, we motivate our research goals and present a detailed summary of our results. In Chapter 2, we outline the relevant background: we describe four generic black-box-represented problems with convex structure (Convex Minimization, Convex-Concave Saddle Point, Convex Nash Equilibrium, and Variational Inequality with Monotone Operator) and outline the existing theory of accuracy certificates for these problems. In Chapter 3, we develop techniques for equipping the state-of-the-art NERML algorithm for large-scale nonsmooth problems with convex structure with on-line accuracy certificates, both in the case when the domain of the problem is a simple solid and in the case when the domain is given by a Separation oracle.
In Chapter 4, we develop several novel academic applications of accuracy certificates, primarily to (a) efficiently certifying emptiness of the intersection of finitely many solids given by Separation oracles, and (b) building efficient algorithms for convex minimization over solids given by Linear Optimization oracles (both precise and approximate). In Chapter 5, we apply accuracy certificates to the efficient decomposition of “well structured” convex-concave saddle point problems, with applications to computationally attractive decomposition of a large-scale LP program whose constraint matrix becomes block-diagonal after eliminating a relatively small number of possibly dense columns (corresponding to “linking variables”) and possibly dense rows (corresponding to “linking constraints”).
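The certificate idea underlying this dissertation can be illustrated with a small sketch (a generic projected-subgradient run, not the NERML method itself): after T oracle calls at points x_t with subgradients g_t, any nonnegative weights lam_t summing to 1 yield a computable upper bound on the optimality gap. The test function, box domain and step sizes below are all illustrative.

```python
import numpy as np

# Accuracy-certificate sketch for minimizing a convex f over the box
# X = [-1, 1]^n.  After T oracle calls at points x_t with subgradients g_t,
# weights lam_t >= 0 summing to 1 certify
#   f(x_hat) - min_X f <= eps(lam) = max_{x in X} sum_t lam_t <g_t, x_t - x>,
# where x_hat = sum_t lam_t x_t.  For a box, the inner max has a closed form.

def certified_gap(xs, gs, lam):
    # eps(lam) = sum_t lam_t <g_t, x_t> + ||sum_t lam_t g_t||_1  (box radius 1)
    s = sum(l * g.dot(x) for x, g, l in zip(xs, gs, lam))
    return s + np.abs(sum(l * g for g, l in zip(gs, lam))).sum()

def f(x):                          # simple convex test function, min over X is 0
    return np.abs(x).sum()

def subgrad(x):
    return np.sign(x) + (x == 0)   # a valid subgradient of the l1 norm

rng = np.random.default_rng(0)
n, T = 4, 200
x = rng.uniform(-1, 1, n)
xs, gs = [], []
for t in range(1, T + 1):
    g = subgrad(x)
    xs.append(x.copy()); gs.append(g)
    x = np.clip(x - g / np.sqrt(t), -1.0, 1.0)   # projected subgradient step

lam = np.ones(T) / T                             # uniform certificate weights
x_hat = sum(l * xi for l, xi in zip(lam, xs))
gap = certified_gap(xs, gs, lam)                 # certified optimality gap
```

The bound holds for any admissible weights; sharper certificates come from choosing the weights well, which is part of what the on-line constructions of Chapter 3 automate.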
52

Machine Vision and Autonomous Integration Into an Unmanned Aircraft System

Van Horne, Chris 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / The University of Arizona's Aerial Robotics Club (ARC) sponsors the development of an unmanned aerial vehicle (UAV) able to compete in the annual Association for Unmanned Vehicle Systems International (AUVSI) Seafarer Chapter Student Unmanned Aerial Systems competition. Modern programming frameworks are utilized to develop a robust distributed imagery and telemetry pipeline as a backend for a mission operator user interface. This paper discusses the design changes made for the 2013 AUVSI competition, including the integration of low-latency first-person view, updates to the distributed task backend, and incremental, asynchronous updates to the operator's user interface for real-time data analysis.
53

Unifying Low-Rank Models for Visual Learning

Cabral, Ricardo da Silveira 01 February 2015 (has links)
Many problems in signal processing, machine learning and computer vision can be solved by learning low-rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces with fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matrices of fixed rank. Whilst factorization approaches lead to efficient and kernelizable optimization algorithms, they have been shown to be NP-hard in the presence of missing data. Inspired by recent work in compressed sensing, hard-rank constraints have been replaced by soft-rank constraints, such as the nuclear norm regularizer. Vis-a-vis hard-rank approaches, soft-rank models are convex even in the presence of missing data: but how is convex optimization solving an NP-hard problem? This thesis addresses this question by analyzing the relationship between hard and soft rank constraints in the unsupervised factorization with missing data problem. Moreover, we extend soft-rank models to weakly supervised and fully supervised learning problems in computer vision. There are four main contributions of our work: (1) The analysis of a new unified low-rank model for matrix factorization with missing data. Our model subsumes soft and hard-rank approaches and merges advantages from previous formulations, such as efficient algorithms and kernelization. It also provides justifications on the choice of algorithms and regions that guarantee convergence to global minima. (2) A deterministic “rank continuation” strategy for the NP-hard unsupervised factorization with missing data problem, which is highly competitive with the state of the art and often achieves globally optimal solutions. In preliminary work, we show that this optimization strategy is applicable to other NP-hard problems which are typically relaxed to convex semidefinite programs (e.g., MAX-CUT, quadratic assignment problem).
(3) A new soft-rank fully supervised robust regression model. This convex model is able to deal with noise, outliers and missing data in the input variables. (4) A new soft-rank model for weakly supervised image classification and localization. Unlike existing multiple-instance approaches for this problem, our model is convex.
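The soft-rank relaxation mentioned above can be illustrated with a generic singular-value-thresholding sketch (this is the standard nuclear-norm proximal step, not the thesis's unified model): a gradient step on the observed entries followed by shrinkage of the singular values. The rank, observation ratio and step parameters are illustrative.

```python
import numpy as np

# Nuclear-norm matrix completion via a proximal-gradient (SVT-style) iteration:
# minimize 0.5*||P_obs(X - M)||_F^2 + tau*||X||_* on a synthetic low-rank M.

def svt(X, tau):
    # prox of tau * nuclear norm: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
A, B = rng.standard_normal((20, 3)), rng.standard_normal((3, 20))
M = A @ B                                # rank-3 ground-truth matrix
mask = rng.random(M.shape) < 0.6         # ~60% of the entries are observed

X = np.zeros_like(M)
for _ in range(300):
    X = svt(X + mask * (M - X), tau=0.5)   # gradient step, then shrinkage

err = np.linalg.norm(mask * (X - M)) / np.linalg.norm(mask * M)
```

Unlike the hard-rank factorization, every step of this iteration is a convex proximal step, which is why missing data causes no non-convexity here.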
54

Structured sparsity-inducing norms : statistical and algorithmic properties with applications to neuroimaging

Jenatton, Rodolphe 24 November 2011 (has links) (PDF)
Numerous fields of applied science and industry have recently been witnessing a process of digitisation. This trend has come with an increase in the amount of digital data whose processing becomes a challenging task. In this context, parsimony, also known as sparsity, has emerged as a key concept in machine learning and signal processing. It is indeed appealing to exploit data via only a reduced number of parameters. This thesis focuses on a particular and more recent form of sparsity, referred to as structured sparsity. As its name indicates, we shall consider situations where we are not only interested in sparsity, but where some structural prior knowledge is also available. The goal of this thesis is to analyze the concept of structured sparsity from statistical, algorithmic and applied perspectives. To begin with, we introduce a family of structured sparsity-inducing norms whose statistical aspects are closely studied. In particular, we show what type of prior knowledge they correspond to. We then turn to sparse structured dictionary learning, where we use the previous norms within the framework of matrix factorization. From an optimization viewpoint, we derive several efficient and scalable algorithmic tools, such as working-set strategies and proximal-gradient techniques. With these methods in place, we illustrate, on numerous real-world applications from various fields, when and why structured sparsity is useful. This includes, for instance, restoration tasks in image processing, the modelling of text documents as hierarchies of topics, the inter-subject prediction of object sizes from fMRI signals, and background-subtraction problems in computer vision.
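One of the algorithmic tools mentioned above, the proximal-gradient technique, can be sketched on the group lasso, a standard structured sparsity-inducing norm (the specific norms of the thesis are more general; the data and group layout below are synthetic):

```python
import numpy as np

# Proximal gradient for min_w 0.5*||y - X w||^2 + lam * sum_g ||w_g||_2
# over three disjoint groups of coefficients.

def prox_group_l2(w, groups, t):
    # prox of t * sum_g ||w_g||_2: block soft-thresholding per group
    w = w.copy()
    for g in groups:
        nrm = np.linalg.norm(w[g])
        w[g] = 0.0 if nrm <= t else w[g] * (1 - t / nrm)
    return w

rng = np.random.default_rng(2)
n, d = 100, 9
groups = [slice(0, 3), slice(3, 6), slice(6, 9)]
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0:3] = [1.0, -2.0, 0.5]            # only group 0 is active
y = X @ w_true + 0.01 * rng.standard_normal(n)

L = np.linalg.norm(X, 2) ** 2             # Lipschitz constant of the gradient
w, lam = np.zeros(d), 5.0
for _ in range(500):
    grad = X.T @ (X @ w - y)
    w = prox_group_l2(w - grad / L, groups, lam / L)
```

The block soft-thresholding drives entire inactive groups exactly to zero, which is how this family of norms encodes structural prior knowledge about which variables switch on together.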
55

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien 30 November 2010 (has links) (PDF)
We study in this thesis a particular machine learning approach to representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms they have yielded. We address several open questions related to this framework: How to efficiently optimize the dictionary? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, but also optimization on graphs.
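The sparse decomposition step at the heart of this framework can be sketched as a lasso problem solved by ISTA with a fixed dictionary (the dictionary-learning step studied in the thesis is omitted; the dictionary and signal are synthetic):

```python
import numpy as np

# Sparse coding with a fixed dictionary D: represent a signal x as a sparse
# combination D @ a by solving  min_a 0.5*||x - D a||^2 + lam*||a||_1  (ISTA).

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
m, k = 30, 60
D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
a_true = np.zeros(k)
a_true[[4, 17, 40]] = [1.5, -2.0, 1.0]    # signal uses only three atoms
x = D @ a_true

L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
a, lam = np.zeros(k), 0.05
for _ in range(1000):
    a = soft(a - D.T @ (D @ a - x) / L, lam / L)   # ISTA iteration
```

Dictionary learning then alternates this coding step with an update of D over a training set, which is one of the optimization questions the thesis addresses.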
56

Distance Measurement-Based Cooperative Source Localization: A Convex Range-Free Approach

Kiraz, Fatma January 2013 (has links)
One of the most essential objectives in WSNs is to determine the spatial coordinates of a source or a sensor node holding information. In this study, the problem of range measurement-based localization of a signal source or a sensor is revisited. The main challenge of the problem results from the non-convexity associated with range measurements calculated using the distances from the set of nodes with known positions to a fixed sensor node. Such measurements corresponding to certain distances are non-convex in two and three dimensions. Attempts recently proposed in the literature to eliminate the non-convexity approach the problem as a non-convex geometric minimization problem, using techniques to handle the non-convexity. This study proposes a new fuzzy range-free sensor localization method. The method suggests using some notions of Euclidean geometry to convert the problem into a convex geometric problem. The convex equivalent problem is built using convex fuzzy sets, thus avoiding multiple stable local minima issues; a gradient-based localization algorithm is then chosen to solve the problem. Next, the proposed algorithm is simulated considering various scenarios, including the number of available source nodes, fuzzification level, and area coverage. The results are compared with an algorithm having similar fuzzy logic settings. Also, the behaviour of both algorithms with noisy measurements is discussed. Finally, future extensions of the algorithm are suggested, along with some guidelines.
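For context, here is a minimal sketch of the classical range-based least-squares fit whose non-convexity motivates this work (the thesis's convex fuzzy formulation is not reproduced; anchor positions, noise level and step size are illustrative):

```python
import numpy as np

# Non-convex range-based localization baseline: estimate a source position
# from noisy distances to anchors by gradient descent on
#   f(x) = 0.5 * sum_i (||x - a_i|| - d_i)^2.

rng = np.random.default_rng(4)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 7.0])                          # unknown source position
d = np.linalg.norm(anchors - src, axis=1) + 0.05 * rng.standard_normal(4)

x = anchors.mean(axis=0)                  # initialize at the anchor centroid
for _ in range(2000):
    diff = x - anchors                    # shape (4, 2)
    dist = np.linalg.norm(diff, axis=1)
    grad = ((dist - d) / dist) @ diff     # gradient of f at x
    x -= 0.05 * grad
```

With a poor initialization or fewer anchors, this objective can trap the iterate in a local minimum, which is exactly the failure mode the convex fuzzy reformulation is designed to avoid.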
57

Adaptive Load Management: Multi-Layered And Multi-Temporal Optimization Of The Demand Side In Electric Energy Systems

Joo, Jhi-Young 01 September 2013 (has links)
Well-designed demand response is expected to play a vital role in operating power systems by reducing economic and environmental costs. However, the current system is operated without much information on the benefits of end-users, especially the small ones, who use electricity. This thesis proposes a framework for operating power systems with demand models including the diversity of end-users' benefits, namely adaptive load management (ALM). Since there are a large number of end-users having different preferences and conditions in energy consumption, the information on the end-users' benefits needs to be aggregated at the system level. This leads us to model the system in a multi-layered way, including end-users, load serving entities, and a system operator. On the other hand, the information on the end-users' benefits can be uncertain even to the end-users themselves ahead of time. This information is discovered incrementally as the actual consumption approaches and occurs. For this reason ALM requires a multi-temporal model of a system operation and the end-users' benefits within it. Due to the different levels of uncertainty along the decision-making time horizons, the risks from the uncertainty of information on both the system and the end-users need to be managed. The methodology of ALM is based on Lagrange dual decomposition that utilizes interactive communication between the system, load serving entities, and end-users. We show that under certain conditions, a power system with a large number of end-users can balance at its optimum efficiently over the horizon of a day ahead of operation to near real time. Numerical examples include designing ALM for the right types of loads over different time horizons, and balancing a system with a large number of different loads on a congested network.
We conclude that with the right information exchange by each entity in the system over different time horizons, a power system can reach its optimum including a variety of end-users' preferences and their values of consuming electricity.
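The Lagrange dual decomposition mechanism can be sketched in a toy market: a coordinator iterates a price (the multiplier on the supply constraint) while each end-user independently maximizes its own benefit. The quadratic utilities and the capacity below are invented for illustration and stand in for the layered models of the thesis.

```python
import numpy as np

# Dual decomposition for demand-side coordination: four users with utilities
# u_i(x) = a_i*x - x^2 share a supply limit C.  The coordinator never sees the
# utilities; it only observes total demand and adjusts the price.

a = np.array([10.0, 8.0, 6.0, 4.0])   # marginal-utility intercepts per user
C = 9.0                               # total supply available

price = 0.0
for _ in range(500):
    # each user solves max_x u_i(x) - price*x locally
    x = np.maximum((a - price) / 2.0, 0.0)
    # coordinator takes a subgradient step on the dual (price) variable
    price = max(price + 0.1 * (x.sum() - C), 0.0)
```

At convergence the price clears the market: total demand meets the supply limit even though no entity ever communicated its private benefit function, which is the information structure ALM relies on.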
58

Language modelling with structured penalties

Nelakanti, Anil Kumar 11 February 2014 (has links) (PDF)
Modeling natural language is among the fundamental challenges of artificial intelligence and the design of interactive machines, with applications spanning various domains, such as dialogue systems, text generation and machine translation. We propose a discriminatively trained log-linear model to learn the distribution of words following a given context. Due to data sparsity, it is necessary to appropriately regularize the model using a penalty term. We design a penalty term that properly encodes the structure of the feature space to avoid overfitting and improve generalization while appropriately capturing long-range dependencies. Some nice properties of specific structured penalties can be used to reduce the number of parameters required to encode the model. The outcome is an efficient model that suitably captures long dependencies in language without a significant increase in time or space requirements. In a log-linear model, both training and testing become increasingly expensive with a growing number of classes. The number of classes in a language model is the size of the vocabulary, which is typically very large. A common trick is to cluster classes and apply the model in two steps; the first step picks the most probable cluster and the second picks the most probable word from the chosen cluster. This idea can be generalized to a hierarchy of larger depth with multiple levels of clustering. However, the performance of the resulting hierarchical classifier depends on the suitability of the clustering to the problem. We study different strategies to build the hierarchy of categories from their observations.
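The two-step clustered prediction described above can be sketched with fixed toy probabilities standing in for the trained log-linear model (the vocabulary and clustering are invented):

```python
import numpy as np

# Two-step prediction: p(word | ctx) = p(cluster(word) | ctx) * p(word | cluster, ctx).
# Picking the best cluster first, then the best word inside it, avoids scoring
# the whole vocabulary at once.

clusters = {"the": 0, "a": 0, "cat": 1, "dog": 1}
p_cluster = np.array([0.7, 0.3])                       # p(cluster | context)
p_word_in_cluster = {"the": 0.6, "a": 0.4, "cat": 0.5, "dog": 0.5}

def p_word(w):
    return p_cluster[clusters[w]] * p_word_in_cluster[w]

total = sum(p_word(w) for w in clusters)   # factorization stays normalized
```

With a balanced hierarchy of depth d over a vocabulary of size V, each prediction scores on the order of d * V**(1/d) classes instead of V, which is the speed-up the clustering trick buys; the cost, as noted above, is sensitivity to how well the clustering fits the problem.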
59

Convex Optimization Methods for System Identification

Dautbegovic, Dino January 2014 (has links)
The extensive use of the least-squares problem formulation in many fields is partly motivated by the existence of an analytic solution formula which makes the theory comprehensible and readily applicable, but also easily embedded in computer-aided design or analysis tools. While the mathematics behind convex optimization has been studied for about a century, recent research has stimulated a new interest in the topic. Convex optimization, being a special class of mathematical optimization problems, can be considered a generalization of both least squares and linear programming. As in the case of a linear programming problem, there is in general no simple analytical formula that can be used to find the solution of a convex optimization problem. There exist, however, efficient methods and software implementations for solving a large class of convex problems. The challenge, and the state of the art in using convex optimization, lies in the difficulty of recognizing and formulating the problem. The main goal of this thesis is to investigate the potential advantages and benefits of convex optimization techniques in the field of system identification. The primary work focuses on parametric discrete-time system identification models in which we assume or choose a specific model structure and try to estimate the model parameters for best fit using experimental input-output (IO) data. Developing a working knowledge of convex optimization and treating the system identification problem as a convex optimization problem allows us to reduce the uncertainties in the parameter estimation. This is achieved by reflecting prior knowledge about the system in terms of constraint functions in the least-squares formulation.
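A minimal sketch of this approach under assumed prior knowledge (a stable first-order ARX model with non-negative gain; the true system and the bounds are invented for illustration):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Fit y[t] = a*y[t-1] + b*u[t-1] by least squares, with the prior knowledge
# "stable pole, non-negative gain" expressed as bound constraints 0 <= a < 1,
# b >= 0 — a simple convex (bounded least-squares) identification problem.

rng = np.random.default_rng(5)
N = 400
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.02 * rng.standard_normal()

A = np.column_stack([y[:-1], u[:-1]])     # regressor matrix from IO data
b = y[1:]
res = lsq_linear(A, b, bounds=([0.0, 0.0], [0.99, np.inf]))
a_hat, b_hat = res.x
```

With short or noisy records the unconstrained estimate can drift outside the physically meaningful region; the bounds keep the identified model consistent with what is known about the system, at no loss when the data already agree.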
60

BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION / Beamforming using CVX

Jangam, Ravindra nath vijay kumar January 2014 (has links)
The thesis analyses and validates beamforming methods using convex optimization. CVX, a Matlab-supported tool for convex optimization, has been used to develop this concept. An algorithm is designed by which an appropriate system has been identified by varying parameters such as the number of antennas, passband width, and stopband widths of a beamformer. We have observed the beamformer while minimizing the error in the least-squares and infinity norms. A plot of the optimal values of the least-squares and infinity norms shows a trade-off between these two norms. We have observed convex optimization for a double passband of a beamformer, which demonstrates the flexibility of convex optimization. As an extension of this, we designed a filter in which the stopband is arbitrary: a constraint is used by which the stopband varies depending upon an upper boundary (limiting) line specified in dB. The beamformer has been examined for feasibility by varying parameters such as the number of antennas, arbitrary upper boundaries, stopbands and passband. This shows that there is flexibility for designing a beamformer as desired.
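A rough numpy analogue of the least-squares variant of this design (the thesis uses CVX, which also covers the infinity-norm and arbitrary-stopband cases; the array geometry and band edges below are illustrative):

```python
import numpy as np

# Least-squares beamformer design: choose complex weights for a half-wavelength
# uniform linear array so the response is ~1 over a passband of angles and ~0
# elsewhere.  The infinity-norm version would replace lstsq with an LP/SOCP.

n_ant, spacing = 10, 0.5
deg = np.arange(-90, 91)
theta = np.deg2rad(deg)
# steering matrix: one row per look angle, one column per antenna
steer = np.exp(2j * np.pi * spacing * np.outer(np.sin(theta), np.arange(n_ant)))

desired = np.where(np.abs(deg) <= 10, 1.0, 0.0)   # passband at broadside
w, *_ = np.linalg.lstsq(steer, desired, rcond=None)

resp = np.abs(steer @ w)                          # magnitude response vs angle
```

Varying the number of antennas or the band edges in this sketch reproduces the trade-offs studied in the thesis; the CVX formulation additionally allows per-angle upper-bound constraints, which plain least squares cannot express.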
