81

Dimension Reduction and LASSO using Pointwise and Group Norms

Jutras, Melanie A. 11 December 2018
Principal Components Analysis (PCA) is a statistical procedure commonly used for analyzing high-dimensional data. It is often used for dimensionality reduction, accomplished by determining orthogonal components that contribute most to the underlying variance of the data. While PCA is widely used for identifying patterns and capturing variability of data in lower dimensions, it has known limitations. In particular, PCA represents its results as linear combinations of data attributes and is therefore often seen as difficult to interpret; moreover, because of the underlying optimization problem being solved, it is not robust to outliers. In this thesis, we examine extensions to PCA that address these limitations. Specific techniques researched in this thesis include variations of Robust and Sparse PCA, as well as novel combinations of these two methods that result in a structured low-rank approximation robust to outliers. Our work is inspired by the well-known machine learning method of the Least Absolute Shrinkage and Selection Operator (LASSO) as well as pointwise and group matrix norms. Practical applications, including robust and non-linear methods for anomaly detection in Domain Name System network data and interpretable feature selection for a website classification problem, are discussed along with implementation details and techniques for the analysis of regularization parameters.
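The LASSO penalty this thesis builds on is commonly solved by proximal gradient descent (ISTA), whose key step is pointwise soft-thresholding. The sketch below is a generic textbook illustration, not the thesis's implementation, and all names are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # Pointwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, step=None, iters=500):
    # Minimize 0.5*||X w - y||^2 + lam*||w||_1 by proximal gradient (ISTA):
    # alternate a gradient step on the smooth loss with a shrinkage step.
    n, d = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

The shrinkage step is what drives small coefficients to exactly zero, which is the source of LASSO's interpretable, sparse solutions.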
82

Robust Auto-encoders

Zhou, Chong 27 April 2016
In this thesis, our aim is to improve deep auto-encoders, an important topic in deep learning that has shown connections to latent feature discovery models in the literature. Our model is inspired by robust principal component analysis: we build an outlier filter on top of a basic deep auto-encoder. With this filter, we can split the input data X into two parts, X = L + S, where L can be well reconstructed by a deep auto-encoder and S contains the anomalous parts of the original data X. Filtering out the anomalies increases the robustness of the standard auto-encoder, and thus we name our model the "Robust Auto-encoder". We also propose a novel solver for the robust auto-encoder which alternately optimizes the reconstruction cost of the deep auto-encoder and the sparsity of the outlier filter in pursuit of the optimal solution. This solver is inspired by the Alternating Direction Method of Multipliers, back-propagation, and the alternating projection method, and we demonstrate the convergence properties of this algorithm and its superior performance on standard image recognition tasks. Last but not least, we apply our model to multiple domains, especially cyber-data analysis, where deep models are currently seldom used.
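The X = L + S split can be sketched with a simple alternating scheme. In the sketch below a rank-r truncated SVD stands in for the deep auto-encoder (the model actually used in the thesis), so this is a structural illustration of the alternation only, under that substitution:

```python
import numpy as np

def shrink(M, t):
    # Elementwise soft-thresholding: proximal step for the l1 penalty on S.
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def robust_decompose(X, rank=2, lam=0.1, iters=50):
    # Alternate between (1) fitting a low-rank reconstruction L to X - S
    # (a rank-r SVD stands in for the deep auto-encoder here) and
    # (2) sparsifying the residual S = X - L by soft-thresholding.
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = shrink(X - L, lam)
    return L, S
```

By construction, after the final shrinkage the unexplained residual X - L - S is bounded elementwise by lam, so everything "anomalous" beyond that threshold lands in S.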
83

Distributionally Robust Performance Analysis with Applications to Mine Valuation and Risk

Dolan, Christopher James January 2017
We consider several problems motivated by issues faced in the mining industry. In recent years, it has become clear that mines carry substantial tail risk in the form of environmental disasters, and this tail risk is not incorporated into common pricing and risk models. However, data sets of the extremal climate behavior that drives this risk are very small, and generally inadequate for properly estimating tail behavior. We propose a data-driven methodology that produces reasonable worst-case scenarios given the data size constraints, and we incorporate it into a real-options-based model for the valuation of mines. We propose several iterations of the model, allowing the end user to choose the degree to which they wish to specify the financial consequences of the disaster scenario. Next, in order to perform a risk analysis on a portfolio of mines, we propose a method of estimating the correlation structure of high-dimensional max-stable processes. Using the techniques of Liu et al. (2017) to map the relationship between normal correlations and max-stable correlations, we can then use techniques inspired by Bickel et al. (2008), Liu et al. (2014), and Rothman et al. (2009) to estimate the underlying correlation matrix while preserving a sparse, positive-definite structure. The correlation matrices are then used in the calculation of model-robust risk metrics (VaR, CVaR) using the Sample-Out-of-Sample methodology (Blanchet and Kang, 2017). We conclude with several new techniques developed in the field of robust performance analysis that, while not directly applied to mining, were motivated by our studies of distributionally robust optimization for these problems.
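The risk metrics mentioned here (VaR, CVaR) have standard empirical estimators. The sketch below is a generic illustration of those estimators only, not the Sample-Out-of-Sample methodology:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    # Empirical Value-at-Risk: the alpha-quantile of the loss sample.
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)
    # Empirical CVaR (expected shortfall): mean loss at or beyond VaR.
    tail = losses[losses >= var]
    return var, tail.mean()
```

CVaR is the coherent companion of VaR: rather than a single quantile, it averages the whole tail, which is why it is preferred for capturing disaster-type losses.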
84

Nonconvex Recovery of Low-complexity Models

Qu, Qing January 2018
Today, in the era of big data, there is a pressing need for efficient, scalable, and robust optimization methods to analyze the data we create and collect. Although convex methods offer tractable solutions with global optimality, heuristic nonconvex methods are often more attractive in practice due to their superior efficiency and scalability. Moreover, for better representations of the data, the mathematical models we build today are much more complicated, which often results in highly nonlinear and nonconvex optimization problems. Both of these challenges require us to go beyond convex optimization. While nonconvex optimization is extraordinarily successful in practice, unlike convex optimization, guaranteeing the correctness of nonconvex methods is notoriously difficult: in theory, even finding a local minimum of a general nonconvex function is NP-hard, never mind the global minimum. This thesis aims to bridge the gap between the practice and theory of nonconvex optimization by developing global optimality guarantees for nonconvex problems arising in real-world engineering applications, together with provable, efficient nonconvex optimization algorithms. First, this thesis reveals that for certain nonconvex problems we can construct a model-specialized initialization that is close to the optimal solution, so that simple and efficient methods provably converge to the global solution at a linear rate. These problems include sparse basis learning and convolutional phase retrieval. In addition, the work has led to the discovery of a broader class of nonconvex problems, the so-called ridable-saddle functions. These problems possess characteristic structures in which (i) all local minima are global and (ii) the energy landscape does not have any "flat" saddle points.
More interestingly, when data are large and random, this thesis reveals that many real-world problems are indeed ridable-saddle; examples include complete dictionary learning and generalized phase retrieval. For each of the aforementioned problems, the benign geometric structure allows us to obtain global recovery guarantees using efficient optimization methods with arbitrary initialization.
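The two-stage recipe described above (a specialized initialization followed by a simple local method) can be sketched for real-valued generalized phase retrieval, where one observes y_i = (a_i^T x)^2 and recovers x up to sign. This is a textbook-style illustration of the idea, not the thesis's algorithm, and the step size and iteration counts are illustrative:

```python
import numpy as np

def phase_retrieval(A, y, iters=3000, step=0.05):
    # Recover x (up to sign) from y_i = (a_i^T x)^2, a real-valued analogue
    # of generalized phase retrieval, via spectral init + gradient descent.
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
    # rescaled to the signal norm suggested by the mean of y.
    Y = (A * y[:, None]).T @ A / m
    _, V = np.linalg.eigh(Y)
    x = V[:, -1] * np.sqrt(np.mean(y))
    # Gradient descent on f(x) = (1/4m) sum_i ((a_i^T x)^2 - y_i)^2.
    for _ in range(iters):
        Ax = A @ x
        grad = (A.T @ ((Ax ** 2 - y) * Ax)) / m
        x = x - step * grad
    return x
```

The spectral start lands near the global solution, after which plain gradient descent suffices; this is exactly the pattern the benign-landscape results above are meant to explain.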
85

Robust approach to risk management and statistical analysis.

January 2012
In this thesis we study some structural results in polynomial optimization, with an emphasis on applications to risk management problems and estimation in statistical analysis. The key underlying method is related to the so-called S-lemma in control theory and robust optimization. The original S-lemma was developed by Yakubovich; it states an equivalent condition for a quadratic polynomial to be non-negative over the non-negative domain of other quadratic polynomial(s). In this thesis, we extend the S-lemma to univariate polynomials of any degree. Since robust optimization has a strong connection to the S-lemma, our results lead to many applications in risk management and statistical analysis, including estimating certain nonlinear risk measures under moment bound constraints and an SDP formulation for simultaneous confidence bands. Numerical experiments are conducted and presented to illustrate the effectiveness of the methods. / Detailed summary in vernacular field only. / Wong, Man Hong. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 134-147). / Abstract also in Chinese.
Table of contents:
Abstract --- p.i
摘要 --- p.ii
Acknowledgement --- p.iii
Chapter 1. Introduction --- p.1
Chapter 2. Meeting the S-Lemma --- p.5
Chapter 3. A strongly robust formulation --- p.13
  3.1 A more practical extension for robust optimization --- p.13
    3.1.1 Motivation from modeling aspect --- p.13
    3.1.2 Discussion of a more robust condition --- p.15
Chapter 4. Theoretical developments --- p.19
  4.1 Definition of several order relations --- p.19
  4.2 S-Lemma with a single condition g(x) ≥ 0 --- p.20
Chapter 5. Confidence bands in polynomial regression --- p.47
  5.1 An introduction --- p.47
    5.1.1 A review on robust optimization, nonnegative polynomials and SDP --- p.49
    5.1.2 A review on the confidence bands --- p.50
    5.1.3 Our contribution --- p.51
  5.2 Some preliminaries on optimization --- p.52
    5.2.1 Robust optimization --- p.52
    5.2.2 Semidefinite programming and LMIs --- p.53
    5.2.3 Nonnegative polynomials with SDP --- p.55
  5.3 Some preliminaries on linear regression and confidence region --- p.59
  5.4 Optimization approach to the confidence bands construction --- p.63
  5.5 Numerical experiments --- p.66
    5.5.1 Linear regression example --- p.66
    5.5.2 Polynomial regression example --- p.67
  5.6 Conclusion --- p.70
Chapter 6. Moment bound of nonlinear risk measures --- p.72
  6.1 Introduction --- p.72
    6.1.1 Motivation --- p.72
    6.1.2 Robustness and moment bounds --- p.74
    6.1.3 Literature review in general --- p.76
    6.1.4 More literature review in actuarial science --- p.78
    6.1.5 Our contribution --- p.79
  6.2 Methodological fundamentals behind the moment bounds --- p.81
    6.2.1 Dual formulations, duality and tight bounds --- p.82
    6.2.2 SDP and LMIs for some dual problems --- p.84
  6.3 Worst expectation and worst risk measures on annuity payments --- p.87
    6.3.1 The worst mortgage payments --- p.88
    6.3.2 The worst probability of repayment failure --- p.89
    6.3.3 The worst expected downside risk of exceeding the threshold --- p.90
  6.4 Numerical examples for risk management --- p.94
    6.4.1 A mortgage example --- p.94
    6.4.2 An annuity example --- p.97
  6.5 Conclusion --- p.100
Chapter 7. Computing distributional robust probability functions --- p.101
  7.1 Distributional robust function with a single random variable --- p.105
  7.2 Moment bound of joint probability --- p.108
    7.2.1 Constraint (7.5) in LMIs --- p.112
    7.2.2 Constraint (7.6) in LMIs --- p.112
    7.2.3 Constraint (7.7) in LMIs --- p.116
  7.3 Several model extensions --- p.119
    7.3.1 Moment bound of probability of union events --- p.119
    7.3.2 The variety of domain of x --- p.120
    7.3.3 Higher moments incorporated --- p.123
  7.4 Applications of the moment bound --- p.124
    7.4.1 The Riemann integrable set approximation --- p.124
    7.4.2 Worst-case simultaneous VaR --- p.124
  7.5 Conclusion --- p.126
Chapter 8. Concluding Remarks and Future Directions --- p.127
Appendix A. Nonnegative univariate polynomials --- p.129
Appendix B. First and second moment of (7.2) --- p.131
Bibliography --- p.134
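For reference, the classical quadratic S-lemma that the abstract above says the thesis generalizes to higher-degree univariate polynomials can be stated as follows (this is the standard textbook formulation, not the thesis's extended version):

```latex
% Classical S-lemma (Yakubovich). Let f, g : \mathbb{R}^n \to \mathbb{R} be
% quadratic and suppose g(\bar{x}) > 0 for some \bar{x} (Slater condition).
% Then the following are equivalent:
\begin{align*}
&\text{(i)}\quad g(x) \ge 0 \;\implies\; f(x) \ge 0, \\
&\text{(ii)}\quad \exists\, \lambda \ge 0 \;\text{such that}\;
  f(x) - \lambda\, g(x) \ge 0 \quad \forall x \in \mathbb{R}^n.
\end{align*}
```

Condition (ii) is a single non-negativity constraint on one polynomial, which is what makes the lemma so useful for turning robust constraints into tractable (e.g., SDP) formulations.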
86

Distributionally Robust Optimization and its Applications in Machine Learning

Kang, Yang January 2017
The goal of Distributionally Robust Optimization (DRO) is to minimize the cost of running a stochastic system, under the assumption that an adversary can replace the underlying baseline stochastic model by another model within a family known as the distributional uncertainty region. This dissertation focuses on a class of DRO problems which are data-driven, meaning that the baseline stochastic model corresponds to the empirical distribution of a given sample. One of the main contributions of this dissertation is to show that the class of data-driven DRO problems we study unifies many successful machine learning algorithms, including square-root Lasso, support vector machines, and generalized logistic regression, among others. A key distinctive feature of the class of DRO problems considered here is that the distributional uncertainty region is based on optimal transport costs. In contrast, most DRO formulations to date rely on likelihood-based formulations (such as the Kullback-Leibler divergence, among others). Optimal transport costs include as a special case the so-called Wasserstein distance, which is popular in various statistical applications. The use of optimal transport costs is advantageous relative to divergence-based formulations because the region of distributional uncertainty contains distributions which explore samples outside of the support of the empirical measure, thereby explaining why many machine learning algorithms have the ability to improve generalization. Moreover, the DRO representations that we use to unify the previously mentioned machine learning algorithms provide a clear interpretation of the so-called regularization parameter, which is known to play a crucial role in controlling generalization error. As we establish, the regularization parameter corresponds exactly to the size of the distributional uncertainty region.
Another contribution of this dissertation is the development of statistical methodology to study data-driven DRO formulations based on optimal transport costs. Using this theory, for example, we provide a sharp characterization of the optimal selection of regularization parameters in machine learning settings such as square-root Lasso and regularized logistic regression. Our statistical methodology relies on the construction of a key object which we call the robust Wasserstein profile function (RWP function). The RWP function is similar in spirit to the empirical likelihood profile function in the context of empirical likelihood (EL), but its asymptotic analysis is different because of a certain lack of smoothness which arises in a suitable Lagrangian formulation. Optimal transport costs have many advantages in terms of statistical modeling. For example, we show how to define a class of novel semi-supervised learning estimators which are natural companions of the standard supervised counterparts (such as square-root Lasso, support vector machines, and logistic regression). We also show how to define the distributional uncertainty region in a purely data-driven way. Precisely, the optimal transport formulation allows us to inform the shape of the distributional uncertainty, not only its center (which is given by the empirical distribution). This shape is informed by establishing connections to the metric learning literature. We develop a class of metric learning algorithms based on robust optimization, and we use these algorithms to inform the distributional uncertainty region in our data-driven DRO problem. This means that we endow the adversary with additional constraints that force him to spend effort on regions of importance, further improving the generalization properties of machine learning algorithms.
In summary, we explain how the use of optimal transport costs allows the construction of what we call double-robust statistical procedures. We test all of the procedures proposed in this dissertation on various data sets, showing significant improvement in generalization ability over a wide range of state-of-the-art procedures. Finally, we also discuss a class of stochastic optimization algorithms of independent interest which are particularly useful for solving DRO problems, especially those in which the distributional uncertainty region is based on optimal transport costs.
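The link between the Wasserstein radius and regularization can be illustrated in one dimension, where the Wasserstein-1 distance between equal-size samples has a closed form. Both functions below are toy illustrations of the general idea (the worst-case bound holds for the identity loss only), not the RWP machinery:

```python
import numpy as np

def wasserstein1(u, v):
    # 1-D optimal transport cost (Wasserstein-1) between two equal-size
    # empirical samples: match points in sorted order and average the
    # displacements.
    u, v = np.sort(u), np.sort(v)
    return np.mean(np.abs(u - v))

def worst_case_mean(sample, delta):
    # Toy DRO quantity: sup of E_P[X] over distributions P within
    # Wasserstein-1 radius delta of the empirical measure. For this loss
    # the supremum is achieved by shifting every sample point by delta,
    # so the radius reappears additively, like a regularization term.
    return np.mean(sample) + delta
```

Even in this tiny example the adversary's budget delta shows up directly in the objective, which is the one-dimensional shadow of the "regularization parameter = size of the uncertainty region" result above.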
87

Robust header compression in 4G networks

Santos, António Pedro Freitas Fortuna dos January 2007
Master's thesis in Communication Networks and Services (Redes e Serviços de Comunicação). Faculdade de Engenharia, Universidade do Porto, 2007.
88

Robust Methodology in Evaluating and Optimizing the Performance of Decision Making Units: Empirical Financial Evidence

Gharoie Ahangar, Reza 08 1900
Intelligent algorithm approaches that augment the analytical capabilities of traditional techniques may improve the evaluation and performance of decision making units (DMUs). Crises such as the massive COVID-19 pandemic-related shock to businesses have prompted the deployment of analytical tools that provide solutions to emerging complex questions with remarkable speed and accuracy. Performance evaluation of DMUs (e.g., financial institutions) is challenging and often depends on the sophistication and robustness of analytical methods. Therefore, advances in analytical methods capable of accurate solutions for competitive real-world applications are essential to managers. This dissertation introduces and reviews three robust methods for evaluating and optimizing the decision-making processes of DMUs to assist managers in enhancing the productivity and performance of their operational goals. The first essay proposes a robust search-field division method, which improves the performance of evolutionary algorithms. The second essay proposes a robust double-judgment approach that enhances the efficiency of the data envelopment analysis method. The third essay proposes a robust general regression neural network method to examine the effect of COVID-19-induced shocks on GDP losses across the global economy. These three essays contribute to optimization methodology by introducing novel robust techniques that help managers of DMUs improve the evaluation and performance of their units, and by providing guidelines for selecting appropriate models to improve solutions to real-world optimization problems.
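A general regression neural network, in its basic Nadaraya-Watson form, predicts each query as a Gaussian-kernel-weighted average of the training targets. The minimal sketch below illustrates that baseline form only, not the essay's robust variant, and the bandwidth value is illustrative:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # General regression neural network (Nadaraya-Watson form): each
    # prediction is a Gaussian-kernel-weighted average of training targets,
    # with bandwidth sigma controlling the locality of the average.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)
```

Because the only free parameter is the bandwidth, a GRNN trains in one pass over the data, which is part of its appeal for rapid scenario analysis.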
89

Robust Control Solution of a Wind Turbine

Zamacona M., Carlos, Vanegas A., Fernando January 2008
Power generation using wind turbines is a highly researched control field. Many control designs have been proposed based on continuous-time models, such as PI control or state observers with state feedback, but without special regard to robustness against model uncertainties. The aim of this thesis was to design a robust digital controller for a wind turbine. The design was based on a discrete-time model in the polynomial framework, derived from a continuous-time state-space model based on data from a real plant. A digital controller was then designed by interactive pole placement to satisfy bounds on sensitivity functions. As a result, the controller eliminates steady-state errors after a step response, gives sufficient damping by using dynamic feedback, tolerates changes in the dynamics to account for nonlinear effects, and avoids feedback of high-frequency unmodeled dynamics.
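Pole placement for a discrete-time state-space model can be sketched with Ackermann's formula for single-input systems. The system matrices below are made-up stand-ins (a sampled double integrator), not the thesis's plant data, and this is state feedback rather than the polynomial-framework design described above:

```python
import numpy as np

def ackermann(A, B, poles):
    # Pole placement for a single-input system via Ackermann's formula:
    # K = [0 ... 0 1] C^{-1} phi(A), where C = [B, AB, ..., A^{n-1}B] is
    # the controllability matrix and phi is the desired closed-loop
    # characteristic polynomial evaluated at A.
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    coeffs = np.poly(poles)  # [1, a1, ..., an] with the desired roots
    phi = np.zeros_like(A)
    for i, c in enumerate(coeffs):
        phi = phi + c * np.linalg.matrix_power(A, n - i)
    e = np.zeros((1, n)); e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ phi
```

With u = -Kx, the closed-loop matrix A - BK has exactly the requested pole locations, which is how damping and settling behavior are dialed in.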
90

Non-Iterative, Feature-Preserving Mesh Smoothing

Jones, Thouis R., Durand, Frédo, Desbrun, Mathieu 01 1900
With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative feature-preserving filtering technique applicable to arbitrary "triangle soups". We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes. / Singapore-MIT Alliance (SMA)
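The robust, non-iterative local-averaging idea can be illustrated on a 1-D signal with a bilateral-style filter: weights fall off with both position and value difference, so large jumps are treated as features rather than noise. This is a drastic simplification of the paper's mesh-domain method, and the parameter values are illustrative:

```python
import numpy as np

def bilateral_smooth(y, sigma_s=2.0, sigma_r=0.5):
    # Non-iterative robust smoothing of a 1-D signal: each value becomes a
    # weighted average whose weights decay with index distance (spatial
    # term) and with value difference (range term), so sharp features
    # receive negligible weight from across the discontinuity and survive.
    n = len(y)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)
                   - ((y - y[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * y) / np.sum(w)
    return out
```

The range term is the robust-statistics ingredient: it downweights "outliers" relative to the local neighborhood, exactly the role the robust local predictors play on meshes.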
