371

Polar-Legendre duality in convex geometry and geometric flows

White, Edward C., Jr. January 2008 (has links)
Thesis (M. S.)--Mathematics, Georgia Institute of Technology, 2009. / Committee Chair: Evans Harrell; Committee Member: Guillermo Goldsztein; Committee Member: Mohammad Ghomi
372

Relaxation methods for network flow problems with convex arc costs

January 1985 (has links)
by Dimitri P. Bertsekas, Patrick A. Hossein, Paul Tseng. / "December 1985." / Bibliography: p. 56-57. / National Science Foundation Grant NSF-ECS-8217668
373

Modely planetek z řídké fotometrie / Asteroid Models from Sparse Photometry

Hanuš, Josef January 2013 (has links)
We investigate the photometric accuracy of the sparse data from astrometric surveys available on AstDyS. We use data from the seven surveys with the best accuracy, in combination with relative lightcurves, in the lightcurve inversion method to derive ∼300 new asteroid physical models (i.e., convex shapes and rotational states). We introduce several reliability tests that we apply to all new asteroid models. We investigate the rotational properties of our sample of main-belt asteroids (∼450 models derived here or previously by lightcurve inversion), especially the spin vector distribution. It is clear that smaller asteroids (D < 30 km) have a strongly anisotropic spin vector distribution even when we remove the bias of the lightcurve inversion: the poles are clustered towards the ecliptic poles. We explain this anisotropy as a result of non-gravitational torques (the YORP effect) acting on these objects, because without accounting for these torques we were not able to reproduce such an anisotropic distribution with our model of spin evolution. We also estimate sizes for 41 and 10 asteroids by scaling their models to fit adaptive optics profiles and occultation observations, respectively.
374

Discovery of low-dimensional structure in high-dimensional inference problems

Aksoylar, Cem 10 March 2017 (has links)
Many learning and inference problems involve high-dimensional data such as images, video or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important for developing more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns independent of any specific recovery algorithm. In particular, we show that sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications such as detection of network intrusions, community detection, and detection of anomalous events in surveillance videos or disease outbreaks. Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear time algorithm to solve the relaxed problem, establish convergence and consistency guarantees, and demonstrate its feasibility and performance with experiments on real networks.
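The channel-coding analogy in this abstract can be made concrete with a back-of-the-envelope calculation. The sketch below is only an illustration in the spirit of that analogy, not the dissertation's actual theorems: it assumes a Gaussian linear observation model (an assumption introduced here), counts the log-number of candidate supports, and divides by the mutual information carried by a single sample, in direct analogy with channel capacity.

```python
# Illustrative Fano-style sample-count estimate for k-sparse support recovery.
# All modelling choices here (Gaussian design and noise, fixed coefficient norm)
# are assumptions for the sketch, not the dissertation's setting.
import numpy as np
from scipy.special import gammaln

def log_binom(n, k):
    """Natural log of the binomial coefficient C(n, k), computed stably."""
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def fano_style_sample_estimate(p, k, beta_norm_sq, noise_var):
    """N such that N * I(X_S; Y) matches the log-count of candidate supports."""
    # Per-sample mutual information for y = x_S' beta_S + w with Gaussian x_S, w.
    info_per_sample = 0.5 * np.log(1.0 + beta_norm_sq / noise_var)  # in nats
    return log_binom(p, k) / info_per_sample

# Example: 1000 features, 10 of them relevant, unit signal-to-noise ratio.
est = fano_style_sample_estimate(p=1000, k=10, beta_norm_sq=1.0, noise_var=1.0)
print(f"rough sample-count estimate: {est:.0f}")
```

The tighter characterizations referenced in the abstract replace this single ratio with a maximum over ways of splitting the support and use conditional mutual information terms, but the counting-versus-information structure is the same.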
375

Operator splitting methods for convex optimization : analysis and implementation

Banjac, Goran January 2018 (has links)
Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for large-scale problems or be unsuitable for real-time embedded applications. Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, some of their known limitations are slow asymptotic convergence, sensitivity to ill-conditioning, and inability to detect infeasible problems. The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades. By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although ADMM iterates diverge when the problem at hand is unsolvable, the differences between subsequent iterates converge to a constant vector, which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result. Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators. Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs (QPs) based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas. Finally, operator splitting methods can also be effective in nonconvex optimization: the algorithm developed here significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.
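As a concrete illustration of how a user interacts with such a solver, the snippet below sets up and solves a tiny QP through the OSQP Python interface and inspects the returned status, which is where infeasibility detection surfaces in practice. The problem data are arbitrary toy values chosen here, and the call pattern is a sketch of typical usage rather than a substitute for the OSQP documentation.

```python
import numpy as np
from scipy import sparse
import osqp

# Toy QP: minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()

# ADMM-based infeasibility detection is reported through the status string
# (e.g. 'solved', 'primal infeasible', 'dual infeasible') rather than by the
# iterates silently diverging.
print(res.info.status, res.x)
```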
376

Aerodynamics and performance enhancement of a ground-effect diffuser

Ehirim, Obinna Hyacinth January 2018 (has links)
This study involved experimental and equivalent computational investigations into the automobile-type 3-D flow physics of a diffuser bluff body in ground effect, and novel passive flow-control methods applied to the diffuser flow to enhance the diffuser's aerodynamic performance. The bluff body used in this study is an Ahmed-like body employed in an inverted position, with the slanted section, together with side plates added along both sides, forming the ramped diffuser section. The first part of the study confirmed reported observations from previous studies that the downforce generated by the diffuser in proximity to a ground plane is influenced by the peak suction at the diffuser inlet and the subsequent static pressure recovery towards the diffuser exit. Also, when the bluff body ride height is gradually reduced from high to low, the diffuser flow, as indicated by its force curve and surface flow features, undergoes four distinct flow regimes (types A to D). The type A and B regimes are reasonably symmetrical, made up of two low-pressure-core longitudinal vortices travelling along both sides of the diffuser length, and they increase downforce and drag with reducing ride height. However, below the ride heights of the type B regime, the type C and D regimes are asymmetrical because of the breakdown of one vortex; consequently, a significant loss in downforce and drag occurs. The second part of the study involved the use, near the diffuser exit, of a convex bump on the diffuser ramp surface and an inverted wing between the diffuser side plates as passive flow-control devices. The modification of the diffuser geometry with these devices, employed individually or in combination, induced a second-stage pressure drop and recovery near the diffuser exit. This behaviour was due to the radial pressure gradient induced on the diffuser flow by the suction-surface curvature of the passive devices. As a result of this aerodynamic phenomenon, the diffuser generated additional downforce across the flow regimes, with a marginal increase in drag due to the profile drag induced by the devices.
377

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are invented to learn from data and to use data to perform predictions and analyses. Many agencies are now using machine learning algorithms to provide services and to perform tasks that used to be done by humans. These services and tasks include making high-stakes decisions. Determining the right decision strongly relies on the correctness of the input data. This fact provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data that is fed to the algorithms. And yet, traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many of the complex questions to which a machine learning system must respond have complex answers. Such outputs of the machine learning algorithm can have some internal structure, with exponentially many possible values. Adversarial machine learning is more challenging when the output that we want to predict has a complex structure itself. In this dissertation, a significant focus is on adversarial machine learning for predicting structured outputs. First, we develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph data. It is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even if there is no adversary. While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate the problem of parameter learning for robust, structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes, with little additional computational cost. In this dissertation, we prove that robustness to adversarial manipulation of data is equivalent to some regularization for large-margin structured prediction, and vice versa. This confirms some of the previous results for simpler problems. In practice, an ordinary adversary typically either does not have enough computational power to design the ultimate optimal attack, or does not have sufficient information about the learner's model to do so. Therefore, it often tries to apply many random changes to the input in the hope of making a breakthrough. This implies that if we minimize the expected loss function under adversarial noise, we will obtain robustness against mediocre adversaries. Dropout training resembles such a noise injection scenario. Dropout training was initially proposed as a regularization technique for neural networks. The procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout. Our method calculates the expected loss function under all possible dropout values.
This method results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several different directions. We define the concept of dropout for input space, feature space, and input dimensions, and we introduce methods for approximate marginalization over feature space, even if the feature space is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
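To make the "expected loss under dropout" idea tangible, the sketch below marginalizes a squared loss over random feature dropout and checks the closed form against Monte Carlo sampling. The squared loss is chosen here because it gives a simple closed form; the thesis works with large-margin and structured losses, so everything in the snippet, data included, is an illustrative assumption rather than the dissertation's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, delta = 200, 5, 0.3          # delta: probability of dropping a feature
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
w = rng.normal(size=d)

def expected_dropout_sq_loss(X, y, w, delta):
    """Closed-form expectation over dropout masks (with 1/(1-delta) rescaling):
    the ordinary squared loss plus a data-dependent, ridge-like penalty."""
    resid = y - X @ w
    penalty = (delta / (1.0 - delta)) * np.sum((X ** 2) * (w ** 2), axis=1)
    return np.mean(resid ** 2 + penalty)

def monte_carlo_dropout_sq_loss(X, y, w, delta, n_draws=5000):
    """Empirical average of the squared loss over sampled dropout masks."""
    total = 0.0
    for _ in range(n_draws):
        keep = rng.random(X.shape) >= delta
        Xt = X * keep / (1.0 - delta)          # inverted-scaling dropout
        total += np.mean((y - Xt @ w) ** 2)
    return total / n_draws

print(expected_dropout_sq_loss(X, y, w, delta))
print(monte_carlo_dropout_sq_loss(X, y, w, delta))   # should agree closely
```

The point mirrors the abstract: marginalizing over dropout noise does not require sampling at training time, because the expectation collapses into an ordinary objective with an extra regularization term.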
378

Optimal regression design under second-order least squares estimator: theory, algorithm and applications

Yeh, Chi-Kuang 23 July 2018 (has links)
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature. The criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the results can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c- and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale invariance property for D-optimal designs is also explored. Furthermore, we discuss a computational algorithm for finding optimal designs numerically. Several interesting applications are presented, and the related MATLAB code is provided in the thesis. / Graduate
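For readers unfamiliar with these design criteria and with the kind of numerical algorithm alluded to above, the sketch below runs the classical multiplicative weight update for a D-optimal design of a quadratic model on [-1, 1] under ordinary least squares. This is a textbook illustration under assumptions made here, not the second-order least squares setting or the specific algorithm of the thesis; for this model the D-optimal design is known to put weight 1/3 at each of -1, 0 and 1, which the iteration approaches.

```python
import numpy as np

# Candidate design points and regressors f(x) = (1, x, x^2) on [-1, 1].
xs = np.linspace(-1.0, 1.0, 201)
F = np.column_stack([np.ones_like(xs), xs, xs ** 2])
m = F.shape[1]                                  # number of model parameters

w = np.full(len(xs), 1.0 / len(xs))             # start from the uniform design
for _ in range(2000):
    M = F.T @ (w[:, None] * F)                  # information matrix M(w)
    Minv = np.linalg.inv(M)
    d = np.einsum('ij,jk,ik->i', F, Minv, F)    # variance function d(x, w)
    w *= d / m                                  # multiplicative update (D-optimality)

# Weights concentrate on a few support points; print those above a small threshold.
keep = w > 1e-3
print(np.round(xs[keep], 2), np.round(w[keep], 3))
```

Kiefer-style equivalence theorems of the kind cited in the abstract are what certify such a design: at the optimum, the variance function d(x, w) is bounded above by the number of parameters m, with equality at the support points.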
379

Deriving an Obstacle-Avoiding Shortest Path in Continuous Space: A Spatial Approach

January 2015 (has links)
abstract: The shortest path between two locations is important for spatial analysis, location modeling, and wayfinding tasks. Depending on permissible movement and the availability of data, the shortest path is either derived from a pre-defined transportation network or constructed in continuous space. However, continuous-space movement adds substantial complexity to identifying the shortest path, as the influence of obstacles has to be considered to avoid errors and biases in a derived path. This obstacle-avoiding shortest path in continuous space has been referred to as the Euclidean shortest path (ESP) and has attracted the attention of many researchers. It has been proven that constructing a graph is an effective approach to limit the infinite search options associated with continuous space, reducing the problem to a finite set of potential paths. To date, various methods have been developed for ESP derivation. However, their computational efficiency is limited due to fundamental limitations in graph construction. In this research, a novel algorithm is developed for efficient identification of a graph guaranteed to contain the ESP. This new approach, referred to as the convexpath algorithm, exploits spatial knowledge and GIS functionality to efficiently construct a graph. The convexpath algorithm utilizes the notion of a convex hull to simultaneously identify relevant obstacles and construct the graph. Additionally, a spatial filtering technique based on intermediate shortest paths enhances the identification of relevant obstacles. Empirical applications show that the convexpath algorithm is able to construct a graph and derive the ESP with significantly improved efficiency compared to visibility and local visibility graph approaches. Furthermore, to boost the performance of convexpath in big data environments, a parallelization approach is proposed and applied to the computationally intensive spatial operations of convexpath. Multicore CPU parallelization demonstrates a noticeable efficiency gain over the sequential convexpath. Finally, spatial representation and approximation issues associated with raster-based approximation of the ESP are assessed. This dissertation provides a comprehensive treatment of the ESP, and details an important approach for deriving an optimal ESP in real time. / Dissertation/Thesis / Doctoral Dissertation Geography 2015
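The convex hull intuition behind the convexpath idea can be seen in a minimal setting: for a single convex obstacle blocking the straight segment between an origin and a destination, the shortest obstacle-avoiding path follows one of the two chains of the convex hull of the endpoints and the obstacle vertices. The sketch below, with made-up coordinates and none of the algorithm's obstacle filtering, GIS functionality or handling of multiple or non-convex obstacles, simply picks the shorter chain.

```python
import numpy as np
from scipy.spatial import ConvexHull

origin = np.array([0.0, 0.0])
dest = np.array([10.0, 0.0])
# A convex obstacle straddling the straight segment from origin to dest.
obstacle = np.array([[4.0, -1.0], [6.0, -1.5], [6.5, 1.0], [4.5, 1.5]])

pts = np.vstack([origin, dest, obstacle])
hull = ConvexHull(pts)                       # 2-D hull vertices, counter-clockwise
cycle = list(hull.vertices)

def length(path):
    return sum(np.linalg.norm(path[i + 1] - path[i]) for i in range(len(path) - 1))

# Split the hull cycle into the two chains joining origin (index 0) and dest (index 1).
start = cycle.index(0)
ordered = cycle[start:] + cycle[:start]      # rotate the cycle to start at the origin
split = ordered.index(1)
chain_a = [pts[j] for j in ordered[:split + 1]]
chain_b = [pts[j] for j in [ordered[0]] + ordered[split:][::-1]]

best = min((chain_a, chain_b), key=length)
print(np.array(best).round(2), round(length(best), 3))
```

Loosely speaking, repeating this hull construction for every obstacle that still intersects a candidate path is how a convex-hull-based graph guaranteed to contain the ESP can be built up.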
380

Pricing Schemes in Electric Energy Markets

January 2016 (has links)
abstract: Two thirds of U.S. power systems are operated under market structures. A good market design should maximize social welfare and give market participants proper incentives to follow market solutions. Pricing schemes play a very important role in market design. The locational marginal pricing scheme is the core pricing scheme in energy markets. Locational marginal prices are good pricing signals for marginal dispatch costs. However, locational marginal prices alone are not incentive compatible, since energy markets are non-convex. Locational marginal prices capture dispatch costs but fail to capture commitment costs such as startup, no-load, and shutdown costs. As a result, uplift payments are paid to generators in order to provide incentives for generators to follow market solutions. These uplift payments distort pricing signals. In this thesis, pricing schemes in electric energy markets are studied. In the first part, the convex hull pricing scheme is studied and the pricing model is extended with network constraints. The subgradient algorithm is applied to solve the pricing model. In the second part, a stochastic dispatchable pricing model is proposed to better address the non-convexity and uncertainty issues in day-ahead energy markets. In the third part, an energy storage arbitrage model under the current locational marginal price scheme is studied. Numerical test cases are presented to support the arguments in this thesis. The overall market and pricing scheme design is a very complex problem. This thesis gives a thorough overview of pricing schemes in day-ahead energy markets and addresses several key issues in these markets. New pricing schemes are proposed to improve market efficiency. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
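A stripped-down, single-bus illustration of the dual-price idea behind convex hull pricing, and of the subgradient update mentioned above, is sketched below. The generator data, step sizes and the absence of network constraints are all simplifications introduced here; the point is only that maximizing the Lagrangian dual of the commitment problem yields a uniform price that internalizes commitment costs, which the plain locational marginal price does not.

```python
# Toy convex-hull-style price via subgradient ascent on the Lagrangian dual
# of a one-period, one-bus unit commitment (all data below are hypothetical).
def best_response(price, noload, marginal, pmin, pmax):
    """Cost-minimizing commitment and output of one generator facing a price."""
    p_on = pmax if price >= marginal else pmin
    on_value = noload + marginal * p_on - price * p_on   # cost net of energy revenue
    return (p_on, on_value) if on_value < 0.0 else (0.0, 0.0)  # stay off if unprofitable

generators = [
    # (no-load cost $, marginal cost $/MWh, pmin MW, pmax MW), hypothetical units
    (300.0, 20.0, 50.0, 200.0),
    (100.0, 40.0, 20.0, 100.0),
]
demand = 250.0

price = 0.0
for k in range(1, 2001):
    dispatch = [best_response(price, *g)[0] for g in generators]
    subgradient = demand - sum(dispatch)       # demand minus total best-response output
    price += (1.0 / k) * subgradient           # diminishing step size
print(round(price, 2), [round(p, 1) for p in dispatch])
```

In this toy case the dual price settles near 41 $/MWh rather than the 40 $/MWh marginal cost of the last unit, because the smaller generator's no-load cost has to be recovered before it will run; that gap is exactly the kind of commitment cost that uplift payments otherwise cover.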
