71

Adaptive Load Management: Multi-Layered And Multi-Temporal Optimization Of The Demand Side In Electric Energy Systems

Joo, Jhi-Young 01 September 2013 (has links)
Well-designed demand response is expected to play a vital role in operating power systems by reducing economic and environmental costs. However, the current system is operated without much information on the benefits of end-users, especially the small ones, who use electricity. This thesis proposes a framework for operating power systems with demand models that include the diversity of end-users' benefits, namely adaptive load management (ALM). Since there are a large number of end-users with different preferences and conditions in energy consumption, the information on the end-users' benefits needs to be aggregated at the system level. This leads us to model the system in a multi-layered way, including end-users, load serving entities, and a system operator. On the other hand, the information on the end-users' benefits can be uncertain even to the end-users themselves ahead of time; this information is discovered incrementally as the actual consumption approaches and occurs. For this reason ALM requires a multi-temporal model of system operation and of the end-users' benefits within it. Due to the different levels of uncertainty along the decision-making time horizons, the risks from the uncertainty of information on both the system and the end-users need to be managed. The methodology of ALM is based on Lagrange dual decomposition, which utilizes interactive communication between the system, load serving entities, and end-users. We show that under certain conditions, a power system with a large number of end-users can balance at its optimum efficiently over the horizon from a day ahead of operation to near real time. Numerical examples include designing ALM for the right types of loads over different time horizons, and balancing a system with a large number of different loads on a congested network. We conclude that with the right information exchange by each entity in the system over different time horizons, a power system can reach its optimum while including a variety of end-users' preferences and their values of consuming electricity.
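
A minimal sketch, assuming hypothetical quadratic end-user benefit functions and a fixed supply level (both invented for illustration), of the price-based coordination that Lagrange dual decomposition enables: the operator updates a price by subgradient steps while each end-user independently maximizes its own benefit minus energy cost.

```python
import numpy as np

# Hypothetical quadratic utilities U_i(x) = a_i*x - 0.5*b_i*x^2 for each end-user;
# these stand in for the diverse (privately known) benefit functions in ALM.
rng = np.random.default_rng(0)
n_users = 1000
a = rng.uniform(20.0, 60.0, n_users)   # marginal value of the first unit
b = rng.uniform(1.0, 5.0, n_users)     # how quickly marginal value falls off
supply = 6000.0                        # available system supply (illustrative)

def user_response(price):
    # Each user solves max_x U_i(x) - price*x  =>  x_i = (a_i - price)/b_i, x_i >= 0.
    return np.maximum((a - price) / b, 0.0)

# Dual (price) update: subgradient ascent on the Lagrangian dual.
price, step = 0.0, 0.002
for t in range(2000):
    demand = user_response(price).sum()
    price += step * (demand - supply)   # raise price if over-subscribed
    price = max(price, 0.0)

print(f"clearing price ~ {price:.2f}, "
      f"demand ~ {user_response(price).sum():.0f} vs supply {supply:.0f}")
```

In the thesis this exchange runs across layers (end-users, load serving entities, operator) and across time horizons; the sketch collapses it to one layer and a single period.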
72

Language Modeling with Structured Penalties (Modélisation du langage à l'aide de pénalités structurées)

Nelakanti, Anil Kumar 11 February 2014 (has links) (PDF)
Modeling natural language is among the fundamental challenges of artificial intelligence and the design of interactive machines, with applications spanning various domains such as dialogue systems, text generation, and machine translation. We propose a discriminatively trained log-linear model to learn the distribution of words following a given context. Due to data sparsity, it is necessary to appropriately regularize the model using a penalty term. We design a penalty term that properly encodes the structure of the feature space, avoiding overfitting and improving generalization while appropriately capturing long-range dependencies. Certain properties of specific structured penalties can be exploited to reduce the number of parameters required to encode the model. The outcome is an efficient model that suitably captures long dependencies in language without a significant increase in time or space requirements. In a log-linear model, both training and testing become increasingly expensive as the number of classes grows. The number of classes in a language model is the size of the vocabulary, which is typically very large. A common trick is to cluster classes and apply the model in two steps: the first step picks the most probable cluster, and the second picks the most probable word from the chosen cluster. This idea can be generalized to a hierarchy of greater depth with multiple levels of clustering. However, the performance of the resulting hierarchical classifier depends on the suitability of the clustering to the problem. We study different strategies for building the hierarchy of categories from their observations.
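
A toy numerical illustration, with made-up scores and a 12-word vocabulary, of the two-step class-based factorization described above: p(word | context) = p(cluster | context) · p(word | cluster, context), which cuts the normalization cost from O(|V|) to roughly O(K + |V|/K).

```python
import numpy as np

# Toy illustration (hypothetical numbers) of two-step class-based prediction.
rng = np.random.default_rng(1)
V, K = 12, 3                        # tiny vocabulary, 3 clusters of 4 words each
cluster_of = np.repeat(np.arange(K), V // K)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical context-dependent scores; in a log-linear model these would be
# dot products between feature vectors and learned weights.
cluster_scores = rng.normal(size=K)
word_scores = rng.normal(size=V)

p_cluster = softmax(cluster_scores)
p_word = np.zeros(V)
for k in range(K):
    idx = np.where(cluster_of == k)[0]
    # Normalize only within the cluster: this is where the savings come from.
    p_word[idx] = p_cluster[k] * softmax(word_scores[idx])

assert np.isclose(p_word.sum(), 1.0)   # still a valid distribution over V words
print(p_word.round(3))
```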
73

Convex Optimization Methods for System Identification

Dautbegovic, Dino January 2014 (has links)
The extensive use of the least-squares problem formulation in many fields is partly motivated by the existence of an analytic solution formula, which makes the theory comprehensible and readily applicable, and also easily embedded in computer-aided design or analysis tools. While the mathematics behind convex optimization has been studied for about a century, several recent developments have stimulated new interest in the topic. Convex optimization, being a special class of mathematical optimization problems, can be considered a generalization of both least squares and linear programming. As in the case of a linear programming problem, there is in general no simple analytical formula for the solution of a convex optimization problem; there exist, however, efficient methods and software implementations for solving a large class of convex problems. The challenge, and the state of the art in using convex optimization, lies in recognizing and formulating the problem. The main goal of this thesis is to investigate the potential advantages and benefits of convex optimization techniques in the field of system identification. The primary work focuses on parametric discrete-time system identification models, in which we assume or choose a specific model structure and estimate the model parameters for best fit using experimental input-output (IO) data. Developing a working knowledge of convex optimization and treating the system identification problem as a convex optimization problem allows us to reduce the uncertainties in the parameter estimation; this is achieved by reflecting prior knowledge about the system in terms of constraint functions in the least-squares formulation.
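
A sketch of the kind of formulation the thesis advocates, assuming the cvxpy library and synthetic input-output data: a least-squares fit of an FIR model in which prior knowledge — here a nonnegative impulse response, an assumption of this example — enters as a convex constraint.

```python
import numpy as np
import cvxpy as cp

# Synthetic IO data: y is the convolution of input u with a nonnegative
# impulse response h (both invented for illustration).
rng = np.random.default_rng(2)
n, m = 200, 10                        # samples, FIR model order
h_true = np.exp(-0.5 * np.arange(m))  # a decaying (nonnegative) response
u = rng.normal(size=n)
Phi = np.column_stack([np.concatenate([np.zeros(k), u[: n - k]])
                       for k in range(m)])      # delayed copies of the input
y = Phi @ h_true + 0.1 * rng.normal(size=n)

# Least squares with prior knowledge (h >= 0) expressed as a convex constraint.
h = cp.Variable(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - Phi @ h)), [h >= 0])
problem.solve()
print("constrained estimate:", np.round(h.value, 3))
```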
74

BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION / Beamforming using CVX

Jangam, Ravindra nath vijay kumar January 2014 (has links)
The thesis analyses and validates beamforming methods using convex optimization. CVX, a MATLAB-supported tool for convex optimization, has been used to develop the concept. An algorithm is designed by which an appropriate system is identified by varying parameters such as the number of antennas and the passband and stopband widths of a beamformer. We observe the beamformer by minimizing the error under the least-squares and infinity norms; a graph of the optimal values obtained under the two norms shows the trade-off between them. We also apply convex optimization to a double-passband beamformer, which demonstrates the flexibility of the approach. As an extension, we design a filter with an arbitrary stopband: a constraint is used whereby the stopband response varies depending on an upper boundary (limiting) line specified along the y-axis (dB). The beamformer is examined for feasibility by varying parameters such as the number of antennas, the arbitrary upper boundaries, the stopbands, and the passband. This shows the flexibility available for designing a beamformer as desired.
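
A sketch of a minimax beamformer design in the same spirit, written with cvxpy rather than the MATLAB CVX tool used in the thesis; the array size, look direction, and stopband region are illustrative assumptions, not the thesis' settings.

```python
import numpy as np
import cvxpy as cp

# Minimax (infinity-norm) design for a uniform linear array: minimize the
# worst-case stopband response subject to a distortionless look direction.
N = 10                                       # number of antennas (assumed)
d = 0.5                                      # element spacing in wavelengths
theta0 = 0.0                                 # look direction: broadside
stop = np.deg2rad(np.linspace(30, 90, 60))   # sampled stopband angles (assumed)

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(N) * np.sin(theta))

A_stop = np.array([steering(t) for t in stop])
a0 = steering(theta0)

w = cp.Variable(N, complex=True)
problem = cp.Problem(cp.Minimize(cp.max(cp.abs(A_stop @ w))),
                     [a0.conj() @ w == 1])   # unit gain at the look direction
problem.solve()
print("worst stopband gain (dB):", 20 * np.log10(problem.value))
```

Swapping the objective for `cp.norm(A_stop @ w, 2)` gives the least-squares counterpart, which is how the norm trade-off described above can be traced numerically.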
75

Convex optimization under inexact first-order information

Lan, Guanghui 29 June 2009 (has links)
In this thesis we investigate the design and complexity analysis of algorithms for solving convex programming problems under inexact first-order information. In the first part of this thesis we focus on general non-smooth convex minimization under a stochastic oracle. We start by introducing an important algorithmic advancement in this area, namely, the development of the mirror descent stochastic approximation algorithm; the main contribution here is a validation procedure for this algorithm applied to stochastic programming. In the second part of this thesis we consider Stochastic Composite Optimization (SCO), which covers smooth, non-smooth, and stochastic convex optimization as special cases, and for which we establish a lower bound on the rate of convergence; optimization algorithms that can achieve this lower bound had never been developed. Our contribution to this topic consists of the following aspects. First, through a novel analysis, we demonstrate that the simple RM-SA algorithm applied to these problems exhibits the best rate of convergence known so far. Moreover, by adapting Nesterov's optimal method, we propose an accelerated SA which achieves, uniformly in dimension, the theoretically optimal rate of convergence for this class of problems. Finally, the significant advantages of the accelerated SA over existing algorithms are illustrated in the context of solving a class of stochastic programming problems. In the last part of this work, we turn to certain deterministic optimization techniques which operate on approximate first-order information for the dual problem. In particular, we establish, for the first time in the literature, the iteration complexity of inexact augmented Lagrangian (I-AL) methods applied to a special class of convex programming problems.
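
A minimal sketch of the (Euclidean) robust stochastic approximation idea underlying these methods, on a hypothetical stochastic objective chosen for this example: stochastic subgradient steps with O(1/√t) stepsizes combined with iterate averaging.

```python
import numpy as np

# Robust SA in its simplest Euclidean form, on the toy stochastic objective
# f(x) = E_xi[ 0.5 * ||x - xi||^2 ], whose minimizer is E[xi].
rng = np.random.default_rng(3)
dim, T = 5, 20000
mean = np.arange(1.0, 6.0)             # the unknown optimum E[xi] (assumed)

x = np.zeros(dim)
x_avg = np.zeros(dim)
for t in range(1, T + 1):
    xi = mean + rng.normal(size=dim)   # one sample from the stochastic oracle
    g = x - xi                         # stochastic gradient of 0.5*||x - xi||^2
    x = x - (1.0 / np.sqrt(t)) * g     # O(1/sqrt(t)) stepsize
    x_avg += (x - x_avg) / t           # running (Polyak) average of iterates

print("averaged iterate:", x_avg.round(3), " true optimum:", mean)
```

The accelerated SA of the thesis replaces this plain recursion with a Nesterov-style multi-step scheme; the oracle interface (one stochastic gradient per iteration) is the same.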
76

Discovery of low-dimensional structure in high-dimensional inference problems

Aksoylar, Cem 10 March 2017 (has links)
Many learning and inference problems involve high-dimensional data such as images, video, or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important for developing more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection, and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns independent of any specific recovery algorithm. In particular, we show that sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon the existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications such as detection of network intrusions, community detection, and detection of anomalous events in surveillance videos or disease outbreaks. Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear time algorithm to solve the relaxed problem, establish convergence and consistency guarantees, and demonstrate its feasibility and performance with experiments on real networks.
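
A small empirical companion to the sample-complexity story — not the dissertation's information-theoretic machinery: how often the support of a k-sparse vector is exactly recovered (here by a simple ISTA lasso solver) as the number of samples grows. All problem sizes, penalties, and thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
p, k = 200, 5                              # dimension and sparsity (assumed)

def ista(X, y, lam, iters=500):
    # Proximal gradient for 0.5*||y - X b||^2 + lam*||b||_1.
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - X.T @ (X @ beta - y) / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return beta

for n in (40, 80, 160):
    hits = 0
    for trial in range(20):
        beta_true = np.zeros(p)
        support = rng.choice(p, k, replace=False)
        beta_true[support] = rng.choice([-1.0, 1.0], k)
        X = rng.normal(size=(n, p)) / np.sqrt(n)   # unit-norm columns on average
        y = X @ beta_true + 0.05 * rng.normal(size=n)
        beta_hat = ista(X, y, lam=0.02)
        hits += set(np.flatnonzero(np.abs(beta_hat) > 0.1)) == set(support)
    print(f"n={n}: exact support recovery in {hits}/20 trials")
```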
77

Operator splitting methods for convex optimization : analysis and implementation

Banjac, Goran January 2018 (has links)
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for solving large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method and a bound on the achievable convergence rate; the new bound turns out to be tight for the class of averaged operators. Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas. Finally, operator splitting methods can also be effective in nonconvex optimization: the algorithm developed here significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
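
A short usage sketch of the OSQP Python interface on a deliberately infeasible QP, illustrating the certificate-based infeasibility detection described above; the problem data are hypothetical.

```python
import numpy as np
import osqp
from scipy import sparse

# A one-variable QP whose constraints (x >= 1 and x <= 0) cannot both hold,
# so OSQP should terminate with a primal infeasibility certificate.
P = sparse.csc_matrix(np.array([[2.0]]))    # minimize x^2 ...
q = np.array([0.0])
A = sparse.csc_matrix(np.array([[1.0], [1.0]]))
l = np.array([1.0, -np.inf])                # row 1: x >= 1
u = np.array([np.inf, 0.0])                 # row 2: x <= 0

solver = osqp.OSQP()
solver.setup(P, q, A, l, u, verbose=False)
result = solver.solve()
print(result.info.status)                   # expected: 'primal infeasible'
```

Internally, the diverging ADMM iterates are not wasted: their successive differences converge to the certificate vector that justifies this status.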
78

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are designed to learn from data and to use data to perform predictions and analyses. Many agencies now use machine learning algorithms to provide services and to perform tasks that used to be done by humans, including making high-stakes decisions. Determining the right decision strongly relies on the correctness of the input data, which provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data fed to them. Yet traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions to which a machine learning system must respond have complex answers: the output of the algorithm can have internal structure, with exponentially many possible values. Adversarial machine learning is more challenging when the output to be predicted has a complex structure itself, so a significant focus of this dissertation is on adversarial machine learning for predicting structured outputs. First, we develop a new algorithm that reliably performs collective classification, jointly assigning labels to the nodes of graph data. It is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even when there is no adversary. While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm; the proposed regularization techniques secure the algorithm against adversarial data changes with little additional computational cost. We prove that robustness to adversarial manipulation of data is equivalent to some regularization for large-margin structured prediction, and vice versa, confirming some previous results for simpler problems. In practice, an ordinary adversary typically either lacks the computational power to design the optimal attack or lacks sufficient information about the learner's model to do so; it therefore often applies many random changes to the input in the hope of making a breakthrough. This implies that minimizing the expected loss function under adversarial noise yields robustness against mediocre adversaries. Dropout training resembles such a noise-injection scenario. Dropout was initially proposed as a regularization technique for neural networks; the procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout, calculating the expected loss function under all possible dropout values. This results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several directions: we define the concept of dropout for input space, feature space, and input dimensions, and introduce methods for approximate marginalization over the feature space, even when it is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
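
A worked sanity check of the marginalization idea in its simplest setting — squared loss with a linear scorer, which is a simplification of mine rather than the dissertation's large-margin setting: with "inverted" dropout the expected loss has a closed form equal to the clean loss plus a data-dependent ridge-like penalty.

```python
import numpy as np

# With inverted dropout m_j = Bernoulli(1-p)/(1-p), so E[m_j] = 1 and
# Var(m_j) = p/(1-p), the marginalized squared loss is
#   E_m[(y - w.(m*x))^2] = (y - w.x)^2 + (p/(1-p)) * sum_j w_j^2 x_j^2,
# i.e., the clean loss plus a data-dependent quadratic regularizer.
rng = np.random.default_rng(5)
d, p_drop = 8, 0.3
w = rng.normal(size=d)
x = rng.normal(size=d)
y = 1.0

closed_form = (y - w @ x) ** 2 + (p_drop / (1 - p_drop)) * np.sum(w**2 * x**2)

# Monte Carlo over random dropout masks should agree with the closed form.
n_mc = 200000
masks = (rng.random((n_mc, d)) > p_drop) / (1 - p_drop)
monte_carlo = np.mean((y - (masks * x) @ w) ** 2)

print(f"closed form: {closed_form:.4f}   Monte Carlo: {monte_carlo:.4f}")
```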
79

Optimal regression design under second-order least squares estimator: theory, algorithm and applications

Yeh, Chi-Kuang 23 July 2018 (has links)
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature; the criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the result can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c-, and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale-invariance property of D-optimal designs is also explored. Furthermore, we discuss a computing algorithm for finding optimal designs numerically. Several interesting applications are presented, and the related MATLAB code is provided in the thesis. / Graduate
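
A sketch of the classic convex formulation behind such computing algorithms, assuming cvxpy: D-optimal design as maximizing the log-determinant of the information matrix over weights on candidate design points. This uses the ordinary least-squares information matrix for a quadratic model; the thesis derives the analogous conditions under the second-order least squares estimator.

```python
import numpy as np
import cvxpy as cp

# D-optimal design: choose weights w over candidate points x_i to maximize
# log det of M(w) = sum_i w_i f(x_i) f(x_i)^T, for f(x) = (1, x, x^2).
grid = np.linspace(-1.0, 1.0, 41)
F = np.column_stack([np.ones_like(grid), grid, grid**2])

w = cp.Variable(len(grid), nonneg=True)
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(grid)))
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve(solver=cp.SCS)

support = grid[w.value > 1e-3]
print("support points:", np.round(support, 3))   # classically {-1, 0, 1}
print("weights:", np.round(w.value[w.value > 1e-3], 3))
```

For the quadratic model on [-1, 1] the classical D-optimal design puts weight 1/3 on each of -1, 0, 1, which the numerical solution should approximate.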
80

A Convex Approach for Stability Analysis of Partial Differential Equations

January 2016 (has links)
abstract: A computational framework based on convex optimization is presented for stability analysis of systems described by partial differential equations (PDEs). Specifically, two forms of linear PDEs with spatially distributed polynomial coefficients are considered. The first class includes linear coupled PDEs with one spatial variable. Parabolic, elliptic, or hyperbolic PDEs with Dirichlet, Neumann, Robin, or mixed boundary conditions can be reformulated so as to be used by the framework. As an example, the reformulation is presented for systems governed by the Schrödinger equation (parabolic type) and by the relativistic heat conduction PDE and the acoustic wave equation (hyperbolic types). The second form of PDEs of interest is scalar-valued with two spatial variables; the extra spatial variable allows consideration of problems such as local stability of fluid flows in channels and population dynamics over two-dimensional domains. The approach does not involve discretization and is based on using sum-of-squares (SOS) polynomials and positive semi-definite matrices to parameterize operators which are positive on function spaces. Applying this parameterization to construct Lyapunov functionals with negative derivatives allows stability conditions to be expressed as a set of linear matrix inequalities (LMIs). The MATLAB package SOSTOOLS was used to construct the LMIs, and the resulting LMIs can then be solved using existing semi-definite programming (SDP) solvers such as SeDuMi or MOSEK. Moreover, the proposed approach allows bounds to be calculated on the rate of decay of the solution norm. The methodology is tested on several numerical examples and compared with results obtained from simulation using standard methods of numerical discretization and with analytic solutions. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2016
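
As a finite-dimensional analogue only, assuming cvxpy (the framework itself parameterizes positive operators on function spaces with SOS polynomials via SOSTOOLS), the Lyapunov-to-LMI step can be shown on a stable linear ODE instead of a PDE:

```python
import numpy as np
import cvxpy as cp

# Toy analogue of the Lyapunov-to-LMI step: for x' = Ax, feasibility of
#   P >> 0  and  A^T P + P A << 0
# certifies asymptotic stability. The thesis' framework constructs analogous
# Lyapunov functionals for PDEs and also bounds the decay rate of the norm.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # a stable test matrix (eigenvalues -1, -2)
n = A.shape[0]
eps = 1e-3                            # small margin to keep inequalities strict

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)

print(problem.status)                 # 'optimal' => a Lyapunov certificate exists
print(np.round(P.value, 3))
```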
