11.
Enablers and Inhibitors of Digital Startup Evolution: A Multiple-Case Study of Swedish Business Incubators. Page, Andrew, January 2020.
Global advances in digital technology are facilitating a corresponding rise in digital entrepreneurship and its startup manifestation. Many uncertainties exist on the road to digital startup evolution, a number of which may be successfully navigated with the assistance of business incubators. While these organisations provide valuable guidance and support to the startup community, their efforts are constrained, at least in part, by the lack of a consistent and coherent roadmap to guide both them and their incubatees. This work proposes a solution to that deficiency by addressing the question "What are the enabling and inhibiting factors in digital startup evolution within an incubator setting?" via a multiple-case study that examined digital startups under the umbrella of three business incubators in the city of Umeå, Sweden. It builds on the existing literature both through its narrowed focus on incubators and through its presentation of the Ideation Dynamics Model as a proposed guide for both incubators and digital startups.
12.
Random Edge is not faster than Random Facet on Linear Programs. Hedblom, Nicole, January 2023.
A Linear Program is a problem where the goal is to maximize a linear function subject to a set of linear inequalities. Geometrically, this can be rephrased as finding the highest point on a polyhedron. The Simplex method is a commonly used algorithm to solve Linear Programs. It traverses the vertices of the polyhedron, and in each step, it selects one adjacent better vertex and moves there. There can be multiple vertices to choose from, and therefore the Simplex method has different variants deciding how the next vertex is selected. One of the most natural variants is Random Edge, which in each step of the Simplex method uniformly at random selects one of the better adjacent vertices. It is interesting and non-trivial to study the complexity of variants of the Simplex method in the number of variables, d, and inequalities, N. In 2011, Friedmann, Hansen, and Zwick found a class of Linear Programs for which the Random Edge algorithm is subexponential with complexity 2^Ω(N^(1/4)), where d=Θ(N). Previously all known lower bounds were polynomial. We give an improved lower bound of 2^Ω(N^(1/2)), for Random Edge on Linear Programs where d=Θ(N). Another well studied variant of the Simplex method is Random Facet. It is upper bounded by 2^O(N^(1/2)) when d=Θ(N). Thus we prove that Random Edge is not faster than Random Facet on Linear Programs where d=Θ(N). Our construction is very similar to the previous construction of Friedmann, Hansen and Zwick. We construct a Markov Decision Process which behaves like a binary counter with linearly many levels and linearly many nodes on each level. The new idea is a new type of delay gadget which can switch quickly from 0 to 1 in some circumstances, leading to fewer nodes needed on each level of the construction. The key idea is that it is worth taking a large risk of getting a small negative reward if the potential positive reward is large enough in comparison. 
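The Random Edge pivot rule described above can be made concrete on the simplest polytope, the unit cube, where vertices are 0/1 vectors and two vertices are adjacent iff they differ in one coordinate. The sketch below is an illustration of the pivot rule only: on the cube Random Edge terminates quickly, and the thesis's lower bound requires a far more intricate Markov Decision Process construction. Function name and interface are invented for this example.

```python
import random

def random_edge_walk(c, d, seed=0):
    """Simulate the Random Edge pivot rule on the unit cube [0,1]^d with
    linear objective c.x (assume all c[i] != 0). Vertices are 0/1 vectors;
    flipping coordinate i changes the objective by +/- c[i]. Returns the
    optimal vertex and the number of pivot steps taken."""
    rng = random.Random(seed)
    x = [0] * d
    steps = 0
    while True:
        # Improving edges: flipping coordinate i increases the objective,
        # i.e. x[i] == 0 with c[i] > 0, or x[i] == 1 with c[i] < 0.
        better = [i for i in range(d) if (c[i] > 0) != (x[i] == 1)]
        if not better:
            return x, steps
        i = rng.choice(better)  # Random Edge: uniform among improving edges
        x[i] = 1 - x[i]
        steps += 1
```

Starting from the all-zeros vertex, each coordinate with a positive objective coefficient is flipped exactly once and never revisited, so the walk on the cube takes at most d steps; the hard instances in the thesis are built precisely to defeat this kind of quick progress.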
13.
Dense matrix computations: communication cost and numerical stability. Khabou, Amal, 11 February 2013.
This dissertation focuses on a widely used linear algebra kernel for solving linear systems: the LU decomposition. Usually, such a computation is performed using Gaussian elimination with partial pivoting (GEPP). The backward stability of GEPP depends on a quantity referred to as the growth factor; in general, GEPP leads to modest element growth in practice. However, its parallel version does not attain the communication lower bounds: the panel factorization represents a bottleneck in terms of communication. To overcome this bottleneck, Grigori et al. [60] developed a communication avoiding LU factorization (CALU), which is asymptotically optimal in terms of communication cost at the price of some redundant computation. In theory, the upper bound on its growth factor is larger than that of GEPP; however, CALU is stable in practice. To improve the growth factor upper bound, we study a new pivoting strategy based on strong rank revealing QR factorization, and we develop a new block algorithm for the LU factorization whose growth factor upper bound is smaller than that of GEPP. The strong rank revealing pivoting is then combined with tournament pivoting to produce a communication avoiding LU factorization that is more stable than CALU. For hierarchical systems, multiple levels of parallelism are available, yet none of the previously cited methods fully exploits them. We therefore propose and study two recursive algorithms based on the communication avoiding LU algorithm that are better suited to architectures with multiple levels of parallelism. For an accurate and realistic cost analysis of these hierarchical algorithms, we introduce a hierarchical parallel performance model that takes processor and network hierarchies into account.
This analysis enables us to accurately predict the performance of the hierarchical LU factorization on an exascale platform.
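The growth factor mentioned above can be made concrete with the classical GEPP baseline. The sketch below is only that baseline, not CALU or the thesis's new pivoting strategies (CALU pivots per panel via a tournament); the growth factor is computed from the final U rather than all intermediate matrices, a common simplification, and singular inputs are not handled. Function name is invented for this example.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gaussian elimination with partial pivoting (GEPP) on a copy of the
    square matrix A. Returns P, L, U with P @ A = L @ U, plus the growth
    factor g = max|U_ij| / max|A_ij| (simplified: final U only), the
    quantity that governs GEPP's backward stability."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    amax = np.abs(A).max()
    P, L, U = np.eye(n), np.eye(n), A.copy()
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry in column k to the diagonal.
        p = k + int(np.argmax(np.abs(U[k:, k])))
        U[[k, p]] = U[[p, k]]
        P[[k, p]] = P[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    growth = np.abs(U).max() / amax
    return P, L, U, growth
```

With partial pivoting all multipliers satisfy |L[i, k]| <= 1, which is what keeps the growth factor modest in practice even though its worst-case bound is exponential.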
14.
Nonnegative matrix and tensor factorizations, least squares problems, and applications. Kim, Jingu, 14 November 2011.
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed.
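To make the factorization concrete, here is the textbook Lee-Seung multiplicative-update baseline for NMF, X ≈ WH with W, H ≥ 0. This is explicitly not the faster NLS-based algorithms the thesis develops; it is a minimal sketch with an invented function name.

```python
import numpy as np

def nmf_multiplicative(X, r, iters=500, seed=0):
    """Rank-r NMF of a nonnegative matrix X via Lee-Seung multiplicative
    updates, which monotonically decrease ||X - W H||_F while keeping
    W and H elementwise nonnegative. Baseline sketch only."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates only ever multiply nonnegative quantities, the nonnegativity constraints are maintained automatically; their slow convergence is one motivation for the NLS-based methods discussed next.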
First, efficient algorithms of NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method.
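The block principal pivoting idea for NLS can be sketched as follows: maintain a partition of the variables into a free set F and a zero set G, solve the unconstrained least squares problem on F, and exchange every variable that violates primal (x ≥ 0) or dual (y ≥ 0) feasibility. This is a simplified full-exchange sketch in the spirit of the method, without the backup rule needed for guaranteed termination, so it is not a robust solver; the function name is invented.

```python
import numpy as np

def nnls_bpp(A, b, max_iter=100):
    """Nonnegativity-constrained least squares min ||Ax - b||, x >= 0,
    via a simplified block principal pivoting scheme. KKT conditions:
    x >= 0, y = A^T A x - A^T b >= 0, complementarity via the F/G split."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    F = np.zeros(n, dtype=bool)  # free (strictly positive) set; start empty
    x = np.zeros(n)
    y = -Atb                     # dual residual for x = 0
    for _ in range(max_iter):
        infeas_F = F & (x < 0)    # primal violations among free variables
        infeas_G = ~F & (y < 0)   # dual violations among zeroed variables
        if not infeas_F.any() and not infeas_G.any():
            return np.maximum(x, 0)
        F = F ^ (infeas_F | infeas_G)  # exchange all infeasible variables at once
        x = np.zeros(n)
        if F.any():
            x[F] = np.linalg.solve(AtA[np.ix_(F, F)], Atb[F])
        y = AtA @ x - Atb
    return np.maximum(x, 0)
```

Exchanging whole blocks of variables per iteration, rather than one at a time as in the classical active-set method, is what makes this family of methods fast on the NLS subproblems arising in NMF.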
In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations.
Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. This problem, also known as the Lasso, has become very popular due to its ability to promote sparse solutions. Solving it is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and a variant that overcome a limitation of previous active-set methods are proposed for this problem, with successful experimental results.
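For reference, the Lasso objective itself, min ½‖Ax − b‖² + λ‖x‖₁, can be solved by standard coordinate descent with soft-thresholding. This is a different, textbook technique used here only to make the problem concrete; it is not the thesis's block principal pivoting method, and the function name is invented.

```python
import numpy as np

def lasso_cd(A, b, lam, iters=200):
    """Coordinate-descent Lasso: min 0.5*||Ax - b||^2 + lam*||x||_1.
    Each coordinate update is an exact 1-D minimization via the
    soft-thresholding operator, which is what produces exact zeros."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A * A).sum(axis=0)  # assume no all-zero columns
    for _ in range(iters):
        for j in range(n):
            r = b - A @ x + A[:, j] * x[j]  # residual with coordinate j removed
            rho = A[:, j] @ r
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return x
```

The non-differentiable l1 term is exactly what makes active-set-style bookkeeping (tracking the sign pattern of x, analogous to the F/G partition in NLS) attractive for this problem.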
Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by an observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
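The group-level sparsity that mixed-norm (l1/l2) regularization promotes can be illustrated by its proximal operator, the group soft-threshold: groups whose joint norm falls below the threshold are zeroed out together. This is a generic building block of such methods, not the thesis's specific algorithm; the function name is invented, and each row is treated as one group.

```python
import numpy as np

def group_soft_threshold(V, lam):
    """Proximal operator of lam * sum_g ||V_g||_2, applied row-wise:
    shrink each row's norm by lam, zeroing the whole row when its norm
    is below lam. This is the group analogue of scalar soft-thresholding."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return V * scale
```

Because shrinkage acts on a group's norm rather than on individual entries, all entries of a group become zero simultaneously, which is exactly the shared sparsity pattern the regularizer is designed to encourage.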