431 |
Linear Mixed Model Selection via Minimum Approximated Information Criterion Atutey, Olivia Abena 06 August 2020 (has links)
No description available.
|
432 |
Non-convex Stochastic Optimization With Biased Gradient Estimators Sokolov, Igor 03 1900 (has links)
Non-convex optimization problems appear in various applications of machine learning. Because of their practical importance, these problems have gained a lot of attention in recent years, leading to the rapid development of new efficient stochastic gradient-type methods. In the quest to improve the generalization performance of modern deep learning models, practitioners resort to ever larger datasets in the training process, naturally distributed across a number of edge devices. However, as the amount of training data grows, the computational cost of gradient-type methods increases significantly. In addition, distributed methods almost invariably suffer from the so-called communication bottleneck: the cost of communicating the information the workers need to jointly solve the problem is often very high, and it can be orders of magnitude higher than the cost of computation. This thesis provides a study of first-order stochastic methods addressing these issues. In particular, we structure the study around certain classes of methods, which allowed us to identify current theoretical gaps and fill them with new efficient algorithms.
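A common way to relieve the communication bottleneck described in this abstract is to have workers send compressed, and therefore biased, gradient estimates. The thesis's own algorithms are not reproduced here; the following is only a minimal sketch of one such biased estimator, Top-k sparsification applied to plain SGD on a toy quadratic (the data and parameter values are made up for illustration):

```python
import random

def topk(vec, k):
    # Keep the k largest-magnitude coordinates, zero the rest: a biased compressor.
    idx = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    out = [0.0] * len(vec)
    for i in idx:
        out[i] = vec[i]
    return out

def grad(x, a):
    # Gradient of the component f_a(x) = 0.5 * ||x - a||^2.
    return [xi - ai for xi, ai in zip(x, a)]

def compressed_sgd(data, steps=500, lr=0.1, k=2):
    random.seed(0)
    x = [0.0] * len(data[0])
    for _ in range(steps):
        a = random.choice(data)      # stochastic component selection
        g = topk(grad(x, a), k)      # a worker transmits only k coordinates
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# The minimizer of the average of the components is the mean of the data points,
# here (2, 2, 2, 2); the compressed iterates hover around it.
data = [[1.0, 2.0, 3.0, 4.0], [3.0, 2.0, 1.0, 0.0]]
x = compressed_sgd(data)
```

Because Top-k is biased, the plain SGD analysis does not carry over directly; in practice such compressors are combined with mechanisms like error feedback or variance reduction to retain convergence guarantees, which is the kind of gap the thesis addresses.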
|
433 |
Real-Time Soft Body Physics Engine for Enhanced Convex Polygon Dynamics Vickgren, Martin January 2023 (has links)
This thesis covers the development, implementation, and evaluation of a soft-body physics engine for convex polygon objects. The main feature is a dynamic polygon collider that represents a polygon's shape correctly while still being able to collide with other objects in the simulation. Objects are able to deform both temporarily and permanently using springs with distance constraints. Pressure simulation is also implemented to simulate inflated polygons. The physics bodies do not feature friction between objects, only friction against a static boundary of the simulation. The engine is then evaluated in order to determine whether it can run in real time, which is one of the goals. In the simulation, Verlet integration is used for updating the positions of particles; every polygon is built from these particles, which are combined using constraints so that they act as one object. The main problem solved is the interpenetration solver, which ensures that polygons do not overlap; two formulas are combined to solve it. The collision detection method uses line intersections to determine whether objects overlap, which turned out to be quite expensive for polygons with many vertices. One optimization technique is implemented, axis-aligned bounding boxes around objects, which improved performance significantly and makes the engine more viable for real-time simulations. The physics engine in this report is deterministic using a fixed time step; a dynamic time step is not tested. The engine also only supports discrete collision detection.
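The two core mechanics named in this abstract, Verlet integration and distance constraints, are compact enough to sketch. The following is a minimal illustrative version, not the thesis's actual code; the constants and the even half-and-half correction split are assumptions:

```python
def verlet_step(pos, prev, acc, dt):
    # Position Verlet: velocity is implicit in the difference (pos - prev).
    new = [2 * p - q + a * dt * dt for p, q, a in zip(pos, prev, acc)]
    return new, pos  # the current position becomes the next step's "previous"

def enforce_distance(p1, p2, rest):
    # Project two particles back onto their rest distance, splitting the
    # correction evenly between the endpoints (the stiff-spring limit).
    dx = [b - a for a, b in zip(p1, p2)]
    dist = (dx[0] ** 2 + dx[1] ** 2) ** 0.5
    corr = (dist - rest) / (2.0 * dist)
    p1n = [a + c * corr for a, c in zip(p1, dx)]
    p2n = [b - c * corr for b, c in zip(p2, dx)]
    return p1n, p2n

# Two particles 2.0 apart with rest length 1.0 are pulled to 0.5 and 1.5.
p1, p2 = enforce_distance([0.0, 0.0], [2.0, 0.0], 1.0)
# A particle integrated under gravity from rest begins to fall.
new, prev = verlet_step([0.0, 0.0], [0.0, 0.0], [0.0, -9.8], 0.1)
```

Each simulation step would integrate all particles and then repeatedly enforce the distance constraints of every spring, which is what lets a set of particles behave as one deformable polygon.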
|
434 |
Estimation and Uncertainty Quantification in Tensor Completion with Side Information Somnooma Hilda Marie Bernadette Ibriga (11206167) 30 July 2021 (has links)
<div>This work aims to provide solutions to two significant issues in the effective use and practical application of tensor completion as a machine learning method. The first addresses the challenge of designing fast and accurate recovery methods for tensor completion in the presence of highly sparse and highly missing data. The second takes on the need for robust uncertainty quantification methods for the recovered tensor.</div><div><br></div><div><b>Covariate-assisted Sparse Tensor Completion</b></div><div><b><br></b></div><div>In the first part of the dissertation, we aim to provably complete a sparse and highly missing tensor in the presence of covariate information along tensor modes. Our motivation originates from online advertising, where users' click-through rates (CTR) on ads over various devices form a CTR tensor that can have up to 96% missing entries and many zeros among the non-missing entries. These features make standalone tensor completion unsatisfactory. However, besides the CTR tensor, additional ad features or user characteristics are often available. We propose Covariate-assisted Sparse Tensor Completion (COSTCO) to incorporate covariate information in the recovery of the sparse tensor. The key idea is to jointly extract latent components from both the tensor and the covariate matrix to learn a synthetic representation. Theoretically, we derive the error bound for the recovered tensor components and explicitly quantify the improvements that covariates bring to both the reveal probability condition and the tensor recovery accuracy. Finally, we apply COSTCO to an advertisement dataset from a major internet platform, consisting of a CTR tensor and an ad covariate matrix, leading to a 23% accuracy improvement over the baseline methodology. 
An important by-product of our method is that clustering analysis on the ad latent components from COSTCO reveals interesting new ad clusters that link product industries not connected by existing clustering methods. Such findings could be directly useful for better ad planning procedures.</div><div><b><br></b></div><div><b>Uncertainty Quantification in Covariate-assisted Tensor Completion</b></div><div><br></div><div>In the second part of the dissertation, we propose a framework for uncertainty quantification of the imputed tensor factors obtained from completing a tensor with covariate information. We characterize the distribution of the non-convex estimator obtained from the COSTCO algorithm down to fine scales. This distributional theory in turn allows us to construct provably valid and tight confidence intervals for the unseen tensor factors. The proposed inferential procedure enjoys several important features: (1) it is fully adaptive to noise heteroscedasticity, (2) it is data-driven and automatically adapts to unknown noise distributions, and (3) in the high-missingness regime, the inclusion of side information in the tensor completion model yields tighter confidence intervals than those obtained from standalone tensor completion methods.</div><div><br></div>
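The abstract does not include COSTCO's algorithmic details, but the core idea, jointly factoring a sparsely observed array and a fully observed covariate matrix that share latent components, can be sketched. The toy below is a hypothetical reconstruction using a matrix in place of a tensor and plain alternating gradient steps; the sizes, rank, and step sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, r = 30, 20, 5, 2           # array sizes, covariate dim, latent rank
U = rng.normal(size=(n, r))
V = rng.normal(size=(m, r))
W = rng.normal(size=(p, r))
X = U @ V.T                         # ground-truth low-rank signal
C = U @ W.T                         # side information sharing the row factors U
mask = rng.random((n, m)) < 0.2     # only 20% of entries are observed

Uh = rng.normal(scale=0.1, size=(n, r))
Vh = rng.normal(scale=0.1, size=(m, r))
Wh = rng.normal(scale=0.1, size=(p, r))
lr, lam = 0.01, 1.0                 # lam weights the covariate term

def loss():
    return (np.sum((mask * (Uh @ Vh.T - X)) ** 2)
            + lam * np.sum((Uh @ Wh.T - C) ** 2))

loss0 = loss()
for _ in range(3000):
    R = mask * (Uh @ Vh.T - X)      # residual restricted to observed entries
    S = Uh @ Wh.T - C               # covariate residual (fully observed)
    Uh -= lr * (R @ Vh + lam * S @ Wh)
    Vh -= lr * R.T @ Uh
    Wh -= lr * lam * S.T @ Uh
loss1 = loss()
```

The point of the covariate term is visible in the update for `Uh`: even rows with few observed entries still receive signal through `C`, which is what improves recovery under high missingness.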
|
435 |
Energy Production Cost and PAR Minimization in Multi-Source Power Networks Ghebremariam, Samuel 17 May 2012 (has links)
No description available.
|
436 |
Development of ab initio characterization tool for Weyl semimetals and thermodynamic stability of kagome Weyl semimetals. Saini, Himanshu January 2023 (has links)
Topological materials exhibit ultrahigh magnetoresistance, chiral anomalies, an intrinsic anomalous Hall effect, and unique Fermi arc surface states. Topological materials now include insulators, metals, and semimetals. Weyl semimetals (WSMs) are topological materials whose band structures show linear dispersion with crossings, creating pairs of Weyl nodes of opposite chirality. WSMs host topological Fermi arc surface states connecting Weyl nodes of opposite chirality. Spin-orbit coupling can open a gap in Dirac nodal rings, creating pairs of Weyl nodes when either time-reversal or spatial inversion symmetry (but not both) is broken [1-3]. The chirality of a Weyl node is set by the Berry flux through a closed surface in reciprocal space around it. The purpose of this thesis was to characterize WSMs and investigate their thermodynamic stability. To accomplish these goals, quantum mechanical modeling at the level of density functional theory (DFT) was used.
WloopPHI, a Python module, integrates the characterization of WSMs into WIEN2k, a full-potential all-electron density functional theory package. It calculates the chirality (monopole charge) of a Weyl node with an enhanced Wilson loop method and the Berry phase approach. First, TaAs, a well-characterized Weyl semimetal, is used to validate the code. We then used the approach to characterize the newly discovered WSM YRh6Ge4, and found a set of Weyl points in it.
Further, we studied the stability of the kagome-based materials A3Sn2S2, where A is Co, Rh, or Ru, in the context of ternary phase diagrams and competing binary compounds using DFT. We demonstrated that Co3Sn2S2 and Rh3Sn2S2 are stable compounds by examining the convex hull and the ternary phase diagrams. It is feasible to synthesize Co3Sn2S2 by a chemical reaction between SnS, CoSn, and Co9S8; likewise, Rh3Sn2S2 can be produced from SnS, RhSn, and Rh3S4. On the other hand, we found that Ru3Sn2S2 is thermodynamically unstable with respect to RuS2, Ru3S7, and Ru. Our work provides some insights for confirming materials using the DFT approach.
1. S. M. Young et al. Dirac Semimetal in Three Dimensions. Physical Review Letters 108(14) (2012), 140405.
2. J. Liu and D. Vanderbilt. Weyl semimetals from noncentrosymmetric topological insulators. Physical Review B 90(15) (2014), 155316.
3. H. Weng et al. Weyl Semimetal Phase in Noncentrosymmetric Transition-Metal Monophosphides. Physical Review X 5(1) (2015), 011029. / Thesis / Master of Applied Science (MASc)
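The stability analysis in this abstract rests on the convex hull construction: a phase is thermodynamically stable only if its formation energy lies on the lower convex hull of energy versus composition. A minimal one-dimensional (binary-composition) sketch with hypothetical energies, not values computed in the thesis:

```python
def cross(o, a, b):
    # Positive when the turn o -> a -> b is counter-clockwise (a left turn).
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    # Lower convex hull via Andrew's monotone chain, applied to (x, E) data.
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical formation energies (eV/atom) along a binary composition line.
phases = [(0.0, 0.0), (0.25, -0.30), (0.50, -0.10), (0.75, -0.45), (1.0, 0.0)]
hull = lower_hull(phases)
# The phase at x = 0.50 sits above the tie-line between its neighbours,
# so it is unstable and would decompose into them.
```

For the ternary diagrams studied in the thesis, the same test is applied in two composition dimensions: a phase above the hull decomposes into the hull vertices of the facet directly below it.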
|
437 |
Asynchronous Algorithms for Large-Scale Optimization: Analysis and Implementation Aytekin, Arda January 2017 (has links)
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation on shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution. The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step-size can be selected independently of the delay bound. Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. This time, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations, and expect a near-linear speedup with the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay. Extensive simulations and actual implementations of the algorithms on different platforms on representative real-world problems validate our theoretical results. / <p>QC 20170317</p>
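The central phenomenon in this abstract, convergence despite bounded information delay, is easy to demonstrate on a toy problem. Below is a small simulation (not one of the thesis's algorithms) of gradient descent in which every applied gradient is a fixed number of steps stale:

```python
from collections import deque

def stale_gradient_descent(grad, x0, lr, delay, steps):
    # Each update applies a gradient computed `delay` iterations earlier,
    # mimicking an asynchronous worker reporting back late.
    x = x0
    buf = deque([grad(x0)] * (delay + 1))  # pipeline of in-flight gradients
    for _ in range(steps):
        g = buf.popleft()                  # oldest (stale) gradient is applied
        x = x - lr * g
        buf.append(grad(x))                # a fresh gradient enters the pipeline
    return x

# Minimize f(x) = (x - 3)^2; the iterates still converge to 3 despite the delay.
g = lambda x: 2.0 * (x - 3.0)
x = stale_gradient_descent(g, x0=0.0, lr=0.05, delay=5, steps=400)
```

With these values the iteration is well inside the stability region for this quadratic; push the delay or step size high enough and the same loop oscillates or diverges, which is why delay-insensitive step-size rules like those in the thesis matter.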
|
438 |
Two-Dimensional Convex Hull Algorithm Using the Iterative Orthant Scan / Tvådimensionell Convex Hull Algoritm Med Iterativ Orthant Scan Freimanis, Davis, Hongisto, Jonas January 2017 (has links)
Finding the minimal convex polygon around a set of points in space is of great interest in computer graphics and data science, and doing it efficiently is financially preferable. This thesis explores how a novel variant of bucketing, called the Iterative orthant scan, can be applied to this problem. An algorithm implementing the Iterative orthant scan was created in C++, and its real-time and asymptotic performance was compared to the industry-standard algorithm as well as the traditional textbook convex hull algorithm. The worst-case time complexity was shown to be a logarithmic factor worse than that of the best theoretical algorithm. The real-time performance was 47.3% better than the traditional algorithm and 66.7% worse than the industry standard for a uniform square distribution, and 61.1% better than the traditional algorithm and 73.4% worse than the industry standard for a uniform triangular distribution. For a circular distribution, the algorithm performed 89.6% worse than the traditional algorithm and 98.7% worse than the industry-standard algorithm. The asymptotic performance improved for some of the distributions studied. Parallelization of the algorithm led to an average performance improvement of 35.9% across the tested distributions. Although the created algorithm performed worse than the industry standard, it performed better than the traditional algorithm in most cases and shows promise for parallelization. / Finding the smallest convex polygon around a set of points is of interest in several areas of computer science, and finding it efficiently is of economic interest. This report explores how the Iterative orthant scan method can be applied to the problem of finding this convex hull. An algorithm using the method was implemented in C++, and its performance was compared, both in real time and asymptotically, against the traditional algorithm and the most widely used algorithm. 
The new algorithm's asymptotic worst case was shown to be a logarithmic factor worse than that of the best theoretical algorithm. The new algorithm's real-time performance was 47.3% better than the traditional algorithm and 66.7% worse than the most widely used algorithm for a square distribution of points, and 61.1% better than the traditional algorithm and 73.4% worse than the most widely used algorithm for a triangular distribution. For a circular distribution, our algorithm was 89.6% worse than the traditional algorithm and 98.7% worse than the most common algorithm. Parallelizing our algorithm led to an average improvement of 35.9%, and for some kinds of distributions the performance improved further. Although the algorithm performed worse than the most common algorithm, it is a promising step that it performed better than the traditional algorithm.
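For context on the baselines this abstract compares against: a "traditional textbook" hull algorithm is typically a simple O(n*h) or O(n log n) method. As an illustration only (the thesis's own implementations are not reproduced here), a minimal gift-wrapping (Jarvis march) hull in the spirit of that textbook baseline:

```python
def _d2(a, b):
    # Squared distance, used only to break collinear ties.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def gift_wrap(points):
    # Jarvis march, O(n*h): repeatedly pick the next hull vertex as the point
    # that keeps every other point to its left.
    start = min(points)                 # lexicographically smallest point is on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r == p:
                continue
            c = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            if c < 0 or (c == 0 and _d2(p, r) > _d2(p, q)):
                q = r                   # r is clockwise of q, or farther along the same ray
        p = q
        if p == start:
            break
    return hull

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
hull = gift_wrap(pts)                   # the interior point (1, 1) is excluded
```

Its O(n*h) cost for h hull vertices is why output-sensitive and bucketing approaches like the orthant scan are attractive on large inputs.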
|
439 |
Guidance strategies for the boosted landing of reusable launch vehicles / Strategier för motor-reglerad landning av återanvändbara bärraketer Carpentier, Agathe January 2019 (has links)
This document presents the results of a master thesis conducted from April 2019 to October 2019 under the direction of CNES engineer Eric Bourgeois, as part of the KTH Master of Science in Aerospace Engineering curriculum. Within the framework of development studies for the Callisto demonstrator, the thesis aims at studying and developing possible guidance strategies for the boosted landing. Two main approaches are described: adaptive pseudo-spectral interpolation, and convex optimization. The satisfying results obtained give strong arguments for choosing the latter as part of the Callisto GNC systems; possible implementation strategies and complementary analyses that could be conducted are also described. / This report presents the results of a master thesis carried out from April to October 2019 under the supervision of CNES engineer Eric Bourgeois, as part of a Master of Science in Aerospace Engineering at KTH Royal Institute of Technology. Within the framework of development studies for the Callisto launch vehicle, the work aims to study and develop possible guidance strategies for Callisto's rocket-powered landing. Two main methods are described: adaptive pseudo-spectral interpolation, and convex optimization. The results give strong arguments for choosing the latter of the two methods for Callisto's guidance system, and the report describes possible implementation strategies as well as the complementary analyses that should be conducted.
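Why convex optimization fits landing guidance can be shown in miniature: after discretizing linear dynamics, the final state is affine in the thrust profile, so steering it to a target is a convex problem. The sketch below is a hypothetical 1-D vertical-descent model, not the Callisto formulation (which would add thrust bounds and path constraints and be solved as a second-order cone program); here the minimum-energy touchdown profile reduces to least squares:

```python
import numpy as np

# Discretized 1-D double integrator: altitude and vertical velocity.
dt, N = 0.5, 40
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])                   # thrust acceleration input
gvec = np.array([-0.5 * dt ** 2 * 9.81, -dt * 9.81])    # gravity as a constant input
x0 = np.array([500.0, -20.0])                           # 500 m up, descending at 20 m/s

# Final state is affine in the stacked controls: x_N = d + Phi @ u.
Phi = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
d = np.linalg.matrix_power(A, N) @ x0
for k in range(N):
    d = d + np.linalg.matrix_power(A, N - 1 - k) @ gvec

u = np.linalg.lstsq(Phi, -d, rcond=None)[0]   # minimum-energy touchdown profile
x_final = d + Phi @ u                         # should be (almost exactly) [0, 0]
```

Because `Phi` has more controls than final-state components, `lstsq` returns the minimum-norm thrust profile that lands the vehicle exactly at rest; adding realistic constraints on `u` is what turns this into the full convex program.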
|
440 |
A new scheme for the optimum design of stiffened composite panels with geometric imperfections Elseifi, Mohamed A. 13 November 1998 (has links)
Thin-walled stiffened composite panels, which are among the most widely used structural elements in engineering, possess the unfortunate property of being highly sensitive to geometric imperfections. Existing analysis codes are able to predict the nonlinear postbuckling behavior of a structure with specified imperfections. However, it is impossible to determine the geometric imperfection profile of a nonexistent composite panel early in the design, because of the variety of uncertainties involved in the manufacturing of these panels. Indeed, given the very nature of the manufacturing processes, it is hard to imagine that a given process could ever produce two identical panels.
The objective of this study is to introduce a new design methodology in which a manufacturing model and a convex model of uncertainties are used in conjunction with a nonlinear design tool in order to obtain a more realistic, better-performing final design. First, a finite element code for the nonlinear postbuckling analysis of stiffened panels is introduced. Next, a manufacturing model for the simulation of the autoclave curing of epoxy matrix composites is presented. A convex model of the uncertainties in the imperfections is developed in order to predict the weakest panel profile among a family of panels. Finally, the previously developed tools are linked in a closed-loop design scheme aimed at obtaining a final design that incorporates manufacturing tolerance information through more realistic imperfections. / Ph. D.
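A convex model of uncertainty, as used in this abstract, treats imperfections as members of a bounded convex set and asks for the worst member. In the simplest, linearized setting that worst case has a closed form: over a ball of imperfection amplitudes, the worst profile aligns with the sensitivity vector. A minimal sketch with made-up sensitivities, not the thesis's panel model:

```python
import math
import random

def worst_case(g, radius):
    # Worst case of a linearized response g . d over the ball ||d|| <= radius:
    # the maximizer aligns with g, and the worst value is radius * ||g||.
    norm = math.sqrt(sum(gi * gi for gi in g))
    d_star = [radius * gi / norm for gi in g]
    return d_star, radius * norm

# Hypothetical sensitivities of the buckling load to two imperfection modes.
g = [0.6, -0.8]
d_star, worst = worst_case(g, radius=0.1)

# Random imperfections inside the set can never beat the convex-model bound.
random.seed(1)
sampled = []
for _ in range(1000):
    d = [random.uniform(-1, 1), random.uniform(-1, 1)]
    n = math.sqrt(d[0] ** 2 + d[1] ** 2)
    d = [0.1 * di / n for di in d]      # project onto the sphere of radius 0.1
    sampled.append(g[0] * d[0] + g[1] * d[1])
```

As the sampling comparison illustrates, no member of the set exceeds the closed-form bound, which is why convex models give guaranteed worst-case predictions without resorting to Monte Carlo.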
|