11

Mathematical Modeling and Deconvolution for Molecular Characterization of Tissue Heterogeneity

Chen, Lulu 22 January 2020 (has links)
Tissue heterogeneity, arising from intermingled cellular or tissue subtypes, significantly obscures the analysis of molecular expression data derived from complex tissues. Existing computational methods for deconvolving mixed subtype signals almost exclusively rely on supervision, requiring subtype-specific markers, the number of subtypes, or the subtype compositions of individual samples. We develop a fully unsupervised deconvolution method that dissects complex tissues into molecularly distinct tissue or cell subtypes directly from mixture expression profiles. We implement an R package, deconvolution by Convex Analysis of Mixtures (debCAM), that can automatically detect tissue- or cell-specific markers, determine the number of constituent subtypes, calculate subtype proportions in individual samples, and estimate tissue/cell-specific expression profiles. We demonstrate the performance and biomedical utility of debCAM on gene expression, methylation, and proteomics data. With enhanced data preprocessing and incorporation of prior knowledge, the debCAM software tool will allow biologists to perform a deep and unbiased characterization of tissue remodeling in many biomedical contexts. Purified expression profiles from physical experiments provide both ground truth and a priori information that can be used to validate unsupervised deconvolution results or to improve supervision for various deconvolution methods. Detecting tissue- or cell-specific expressed markers from purified expression profiles plays a critical role in molecularly characterizing and determining tissue or cell subtypes. Unfortunately, classic differential analysis assumes a convenient test statistic and associated null distribution that are inconsistent with the definition of markers, and thus yields a high false-positive rate or reduced detection power. We describe a statistically principled marker detection method, the One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG) test, that estimates a mixture null distribution model using novel permutation schemes. Validated on realistic synthetic data sets with respect to both type 1 error and detection power, the OVESEG test applied to benchmark gene expression data sets detects many known and de novo subtype-specific expressed markers. Subsequent supervised deconvolution, using markers detected by the OVESEG test, shows superior performance compared with popular peer methods. While the current debCAM approach can dissect mixed signals from multiple samples into the 'averaged' expression profiles of subtypes, many downstream molecular analyses of complex tissues require sample-specific deconvolution, in which each sample is a mixture of 'individualized' subtype expression profiles. The between-sample variation embedded in sample-specific subtype signals provides critical information for detecting subtype-specific molecular networks and uncovering hidden crosstalk. However, sample-specific deconvolution is an underdetermined and challenging problem because there are more variables than observations. We propose and develop debCAM2.0, which estimates sample-specific subtype signals by nuclear norm regularization, with the hyperparameter value determined by a cross-validation scheme based on random entry exclusion. We also derive an efficient optimization approach based on ADMM that enables debCAM2.0 to be applied in large-scale biological data analyses. Experimental results on realistic simulation data sets show that debCAM2.0 can successfully recover subtype-specific correlation networks that are unobtainable with existing deconvolution methods. / Doctor of Philosophy / Tissue samples are essentially mixtures of tissue or cellular subtypes, and the proportions of individual subtypes vary across tissue samples. Data deconvolution aims to dissect tissue heterogeneity into biologically important subtypes, their proportions, and their marker genes. The physical solution to tissue heterogeneity is to isolate pure tissue components prior to molecular profiling; however, such experimental methods are time-consuming, expensive, and may alter expression values during isolation. The existing literature primarily focuses on supervised deconvolution methods, which require a priori information. This approach has an inherent problem in that it relies on the quality and accuracy of that a priori information. In this dissertation, we propose and develop a fully unsupervised deconvolution method, deconvolution by Convex Analysis of Mixtures (debCAM), that can estimate the mixing proportions and 'averaged' expression profiles of the individual subtypes present in heterogeneous tissue samples. Furthermore, we propose and develop debCAM2.0, which can estimate 'individualized' expression profiles of the participating subtypes in complex tissue samples. Subtype-specific expressed markers, or marker genes (MGs), serve as critical a priori information for supervised deconvolution. MGs are exclusively and consistently expressed in a particular tissue or cell subtype, but detecting such unique MGs among many subtypes is a challenging task. We propose and develop a statistically principled method, the One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG) test, for robust detection of MGs from purified profiles of many subtypes.
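To make the nuclear-norm regularization mentioned above concrete, the following sketch shows singular value thresholding, the proximal operator of the nuclear norm that ADMM-based solvers apply repeatedly. It is a minimal illustration on simulated mixtures, not the debCAM2.0 implementation; the matrix sizes, noise level, and threshold tau are invented for the example, and the cross-validation and full ADMM loop are omitted.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm.
    This is the core low-rank step repeated inside ADMM-based nuclear-norm solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)              # shrink singular values toward zero
    return (U * s_thr) @ Vt

# Toy mixture: 20 samples x 500 genes generated from 3 subtype expression profiles
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(3), size=20)            # subtype proportions per sample
S = rng.gamma(2.0, 1.0, size=(3, 500))            # subtype-specific expression profiles
X = A @ S + 0.05 * rng.normal(size=(20, 500))     # observed mixtures with noise

# One denoising step: a low-rank estimate of the mixed signal
X_lowrank = svt(X, tau=2.0)
print(np.linalg.matrix_rank(X_lowrank, tol=1e-6)) # expected to be close to the 3 subtypes
```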
14

Computational convex analysis: from continuous deformation to finite convex integration

Trienis, Michael Joseph 05 1900 (has links)
After introducing concepts from convex analysis, we study how to continuously transform one convex function into another. A natural choice is the arithmetic average, as it is pointwise continuous; however, this choice fails to average functions with different domains. In contrast, the proximal average is not only continuous (in the epi-topology) but can actually average functions with disjoint domains. In fact, the proximal average not only inherits strict convexity (like the arithmetic average) but also inherits smoothness and differentiability (unlike the arithmetic average). We then introduce a computational framework for computer-aided convex analysis. Motivated by the proximal average, we note that the class of piecewise linear-quadratic (PLQ) functions is closed under (positive) scalar multiplication, addition, Fenchel conjugation, and the Moreau envelope. As a result, the PLQ framework gives rise to linear-time and linear-space algorithms for convex PLQ functions. We extend this framework to nonconvex PLQ functions and present an explicit convex hull algorithm. Finally, we discuss a method to find primal-dual symmetric antiderivatives from cyclically monotone operators. As these antiderivatives depend on the minimal and maximal Rockafellar functions [5, Theorem 3.5, Corollary 3.10], it turns out that the minimal and maximal functions in [12, pp. 132, 136] are indeed the same functions. Algorithms used to compute these antiderivatives can be formulated as shortest path problems. / Graduate Studies, College of (Okanagan) / Graduate
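As a minimal illustration of one of the PLQ-closed operations listed above, the sketch below evaluates the Moreau envelope of a convex function by brute force on a grid. It is a quadratic-time reference computation with made-up grid and parameter values, not the linear-time PLQ algorithm developed in the thesis.

```python
import numpy as np

def moreau_envelope(f_vals, grid, mu):
    """Brute-force Moreau envelope e_mu f(x) = min_y f(y) + (x - y)^2 / (2 mu) on a grid.
    O(n^2); linear-time algorithms exist for PLQ functions, this is only a reference."""
    X = grid[:, None]                  # evaluation points (column)
    Y = grid[None, :]                  # minimization variable (row)
    return np.min(f_vals[None, :] + (X - Y) ** 2 / (2.0 * mu), axis=1)

grid = np.linspace(-3.0, 3.0, 601)
f = np.abs(grid)                       # f(x) = |x|, a convex PLQ function
env = moreau_envelope(f, grid, mu=0.5)
# The Moreau envelope of |x| is the Huber function: quadratic near 0, linear far away.
print(env[300], env[0])                # ~0 at x = 0, ~|x| - mu/2 for large |x|
```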
15

Continuum limits of evolution and variational problems on graphs

Hafiene, Yosra 05 December 2018 (has links)
The nonlocal p-Laplacian operator, the associated evolution equation and variational regularization, governed by a given kernel, have applications in various areas of science and engineering. In particular, they are modern tools for massive data processing (including signals, images, geometry) and machine learning tasks such as classification. In practice, however, these models are implemented in discrete form (in space and time, or in space for variational regularization) as a numerical approximation to a continuous problem, where the kernel is replaced by the adjacency matrix of a graph. Yet, few results on the consistency of these discretizations are available. In particular, it is largely open to determine when the solutions of either the evolution equation or the variational problem of graph-based tasks converge (in an appropriate sense), as the number of vertices increases, to a well-defined object in the continuum setting, and if so, at which rate. In this manuscript, we lay the foundations to address these questions. Combining tools from graph theory, convex analysis, nonlinear semigroup theory and evolution equations, we give a rigorous interpretation to the continuous limit of the discrete nonlocal p-Laplacian evolution and variational problems on graphs. More specifically, we consider a sequence of (deterministic) graphs converging to a so-called limit object known as the graphon. If the continuous p-Laplacian evolution and variational problems are properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems converge to the solution of the continuous problem governed by the graphon as the number of graph vertices grows to infinity. Along the way, we provide consistency/error bounds. In turn, this allows us to establish convergence rates for different graph models. In particular, we highlight the role of the graphon geometry/regularity. For random graph sequences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in probability and exhibit the different regimes depending on p, the regularity of the graphon and the initial data.
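A hedged sketch of the discrete object studied here: an explicit Euler scheme for the nonlocal p-Laplacian evolution on a weighted graph sampled from a simple graphon. The graphon W(x, y) = xy, the time step, and the initial data are illustrative choices, not the discretizations or convergence rates analyzed in the thesis.

```python
import numpy as np

def graph_p_laplacian_flow(W, u0, p=1.5, dt=1e-3, steps=2000):
    """Explicit Euler scheme for du_i/dt = (1/n) sum_j W_ij |u_j - u_i|^(p-2) (u_j - u_i),
    a discrete nonlocal p-Laplacian evolution on a weighted graph."""
    n = len(u0)
    u = u0.copy()
    for _ in range(steps):
        diff = u[None, :] - u[:, None]                    # u_j - u_i for every pair (i, j)
        flux = W * np.sign(diff) * np.abs(diff) ** (p - 1)  # sign(d)|d|^(p-1) = |d|^(p-2) d
        u = u + dt * flux.sum(axis=1) / n
    return u

# Weighted graph sampled deterministically from the graphon W(x, y) = x * y
n = 200
x = (np.arange(n) + 0.5) / n
W = np.outer(x, x)
rng = np.random.default_rng(1)
u0 = rng.normal(size=n)                                   # initial data on the vertices
u_T = graph_p_laplacian_flow(W, u0)
print(u0.std(), u_T.std())                                # the flow reduces variation in the signal
```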
16

Studies of inventory control and capacity planning with multiple sources

Zahrn, Frederick Craig 06 July 2009 (has links)
This dissertation consists of two self-contained studies. The first study, in the domain of stochastic inventory theory, addresses the structure of optimal ordering policies in a periodic review setting. We take multiple sources of a single product to imply an ordering cost function that is nondecreasing, piecewise linear, and convex. Our main contribution is a proof of the optimality of a finite generalized base stock policy under an average cost criterion. Our inventory model is formulated as a Markov decision process with complete observations. Orders are delivered immediately. Excess demand is fully backlogged, and the function describing holding and backlogging costs is convex. All parameters are stationary, and the random demands are independent and identically distributed across periods. The (known) demand distribution and the holding and backlogging cost function are subject to mild assumptions. Our proof uses a vanishing discount approach. We extend our results from a continuous environment to the case where demands and order quantities are integral. The second study is in the area of capacity planning. Our overarching contribution is a relatively simple and fast solution approach for the fleet composition problem faced by a retail distribution firm, focusing on the context of a major beverage distributor. Vehicles to be included in the fleet may be of multiple sizes; we assume that spot transportation capacity will be available to supplement the fleet as needed. We aim to balance the fixed costs of the fleet against exposure to high variable costs due to reliance on spot capacity. We propose a two-stage stochastic linear programming model with fixed recourse. The demand on a particular day in the planning horizon is described by the total quantity to be delivered and the total number of customers to visit. Thus, daily demand throughout the entire planning period is captured by a bivariate probability distribution. We present an algorithm that efficiently generates a "definitive" collection of bases of the recourse program, facilitating rapid computation of the expected cost of a prospective fleet and its gradient. The equivalent convex program may then be solved by a standard gradient projection algorithm.
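The simulation below is a rough illustration of the first study's setting: a finite generalized base stock policy with two supply sources whose combined ordering cost is piecewise linear and convex, with immediate delivery and full backlogging. The Poisson demand, cost parameters, and grid search are invented for the sketch and are not taken from the dissertation.

```python
import numpy as np

def avg_cost(S1, S2, c=(1.0, 2.0), b=5, h=1.0, back=4.0, T=20_000, seed=0):
    """Simulate a two-source generalized base stock policy and return its long-run
    average cost per period. Source 1 (unit cost c[0]) supplies at most b units per
    period; source 2 (unit cost c[1] > c[0]) is uncapacitated. The policy orders cheap
    units up to level S1 and, if still below S2 <= S1, expensive units up to S2.
    Demand is i.i.d. Poisson; orders arrive immediately and excess demand backlogs."""
    rng = np.random.default_rng(seed)
    x, total = 0.0, 0.0                      # inventory position, accumulated cost
    for d in rng.poisson(4.0, size=T):
        q1 = min(max(S1 - x, 0), b)          # cheap source, capacity b
        q2 = max(S2 - (x + q1), 0)           # expensive source tops up to S2
        x += q1 + q2
        total += c[0] * q1 + c[1] * q2       # piecewise-linear convex ordering cost
        x -= d                               # demand realizes after delivery
        total += h * max(x, 0) + back * max(-x, 0)
    return total / T

# Small grid search over the two base stock levels (S2 <= S1)
best = min((avg_cost(S1, S2), S1, S2) for S1 in range(4, 14) for S2 in range(2, S1 + 1))
print(best)
```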
17

Bilevel optimization: Existence of solutions, duality and optimality conditions

Saissi, Fatima Ezzarha 06 July 2017 (has links)
Since its introduction, the class of two-level programming problems has attracted increasing interest. Indeed, because of its applications to a multitude of concrete problems (management, economic planning, chemistry, environmental sciences, ...), several researchers have been interested in the study of this class of problems. This thesis deals with the study of some classes of two-level optimization problems, namely, strong two-level problems, strong-weak two-level problems and semi-vectorial two-level problems. In the first chapter, we recall some definitions and results related to topology and convex analysis that are used in our study. In the second chapter, we discuss some theoretical and algorithmic results established in the literature for solving some classes of two-level optimization problems. The third chapter deals with strong-weak Stackelberg problems. As is well known, this class of problems presents difficulties concerning the existence of solutions. Therefore, for a strong-weak two-level optimization problem, we first give a regularization. Then, via this regularization and under appropriate assumptions, we show the existence of solutions to such a problem. This result generalizes the one given in the literature for weak Stackelberg problems. In the fourth chapter, we give a duality approach for a strong two-level programming problem (S). The duality approach is based on the use of a regularization and Fenchel-Lagrange duality. Then, via this approach, we give necessary optimality conditions for (S). Finally, sufficient optimality conditions are given for the initial problem (S). An application to a two-level resource allocation problem is given. In the fifth chapter, we consider a semivectorial two-level programming problem (SVBL) where the upper and lower levels are vectorial and scalar, respectively. For such a problem, we give a duality approach based on the use of a regularization, a scalarization and Fenchel-Lagrange duality. Then, via this approach, we establish necessary optimality conditions for (SVBL). Finally, we give sufficient optimality conditions without using the duality approach.
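As a small numerical illustration of the bilevel structure and of the role of regularizing the lower level, the sketch below solves a toy two-level problem by enumerating the leader's variable on a grid and solving a slightly regularized follower's problem at each point. The objective functions and the regularization parameter are invented; this is not the duality-based approach developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lower_level(x, eps=0.0):
    """Follower's (regularized) problem: minimize f(x, y) = (y - x)^2 + eps * y^2 over y in [0, 2]."""
    res = minimize_scalar(lambda y: (y - x) ** 2 + eps * y ** 2,
                          bounds=(0.0, 2.0), method="bounded")
    return res.x

def upper_value(x, eps=0.0):
    """Leader's cost F(x, y) = x^2 + (y - 1)^2 evaluated at the follower's response."""
    y = lower_level(x, eps)
    return x ** 2 + (y - 1.0) ** 2

xs = np.linspace(-1.0, 2.0, 301)
vals = [upper_value(x, eps=1e-3) for x in xs]   # small regularization of the lower level
x_best = xs[int(np.argmin(vals))]
print(x_best, min(vals))                        # roughly x = 0.5 for this toy instance
```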
18

A Classification Tool for Predictive Data Analysis in Healthcare

Victors, Mason Lemoyne 07 March 2013 (has links) (PDF)
Hidden Markov Models (HMMs) have seen widespread use in a variety of applications ranging from speech recognition to gene prediction. While developed over forty years ago, they remain a standard tool for sequential data analysis. More recently, Latent Dirichlet Allocation (LDA) was developed and soon gained widespread popularity as a powerful topic analysis tool for text corpora. We thoroughly develop LDA and a generalization of HMMs and demonstrate the conjunctive use of both methods in predictive data analysis for health care problems. While these two tools (LDA and HMM) have been used in conjunction previously, we use LDA in a new way to reduce the dimensionality involved in the training of HMMs. With both LDA and our extension of HMM, we train classifiers to predict development of Chronic Kidney Disease (CKD) in the near future.
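The sketch below mirrors this pipeline only loosely: LDA compresses per-visit code counts into a few topic weights, and an HMM is then fit on the resulting per-patient sequences. It uses random stand-in data and the third-party hmmlearn package; in an actual classifier one would fit one HMM per outcome class (e.g., CKD vs. no CKD) and compare sequence likelihoods, a step omitted here.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from hmmlearn.hmm import GaussianHMM   # third-party package "hmmlearn"

rng = np.random.default_rng(0)

# Stand-in data: 50 patients, 10 visits each, 300 medical-code counts per visit
n_patients, n_visits, n_codes = 50, 10, 300
counts = rng.poisson(0.2, size=(n_patients * n_visits, n_codes))

# Step 1: LDA compresses each visit's sparse code counts into topic weights,
# reducing the dimensionality that the HMM has to model.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topics = lda.fit_transform(counts)                       # (visits, 5) topic proportions

# Step 2: fit an HMM on the per-patient sequences of topic weights
lengths = [n_visits] * n_patients                        # visits per patient, in order
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(topics, lengths)

# Log-likelihood of one patient's visit sequence under the fitted model
print(hmm.score(topics[:n_visits]))
```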
19

Static Analysis of Numerical Programs: Constrained Affine Sets

Ghorbal, Khalil 28 July 2011 (has links) (PDF)
We work in the setting of static program analysis and focus on numerical properties, that is, properties concerning the numerical values of program variables. In particular, we aim to compute a guaranteed over-approximation of the set of possible values of each numerical variable used in the analyzed program. This static analysis is carried out within the framework of abstract interpretation, a theory that offers a trade-off between the theoretical limits of undecidability and computability and the precision of the obtained results. We build on the work of Eric Goubault and Sylvie Putot, which we extend and generalize. Our new abstract domain, called constrained affine sets, combines the computational efficiency of domains based on affine forms with the expressive power of classical relational domains such as octagons or polyhedra. The new domain has been implemented to demonstrate the value of this combination, as well as its advantages, performance, and limits compared with other existing numerical domains. The formalism and the practical results have been published in several papers [CAV 2009, CAV 2010].
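As a pointer to the kind of domain being extended, the sketch below implements plain affine forms (the zonotopic representation underlying affine-form-based domains) with addition, subtraction, and interval concretization. It shows how shared noise symbols preserve relations that plain interval arithmetic loses; it is not the constrained affine sets domain of the thesis, and the class name and example bounds are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AffineForm:
    """An affine form x0 + sum_i xi * eps_i with noise symbols eps_i ranging over [-1, 1]."""
    center: float
    deviations: dict = field(default_factory=dict)   # noise symbol id -> coefficient

    def __add__(self, other):
        devs = dict(self.deviations)
        for k, v in other.deviations.items():
            devs[k] = devs.get(k, 0.0) + v            # shared symbols combine linearly
        return AffineForm(self.center + other.center, devs)

    def __sub__(self, other):
        neg = AffineForm(-other.center, {k: -v for k, v in other.deviations.items()})
        return self + neg

    def interval(self):
        rad = sum(abs(v) for v in self.deviations.values())
        return (self.center - rad, self.center + rad)

# x in [0, 10] represented with the shared noise symbol 'e1'
x = AffineForm(5.0, {"e1": 5.0})
print((x - x).interval())   # (0.0, 0.0): relations between occurrences are kept
print((x + x).interval())   # (0.0, 20.0); interval arithmetic would give [-10, 10] for x - x
```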
20

Convex nonsmooth optimization and decomposition methods in operations research

Zaourar, Sofia 04 November 2014 (has links)
Decomposition methods are an application of the divide-and-conquer principle to large-scale optimization. Their idea is to decompose a given optimization problem into a sequence of easier subproblems. Although successful for many applications, these methods still present challenges. In this thesis, we propose methodological and algorithmic improvements of decomposition methods and illustrate them on several operations research problems. Our approach relies heavily on convex analysis and nonsmooth optimization. In constraint decomposition (or Lagrangian relaxation) applied to short-term electricity generation management, even the subproblems are too difficult to solve exactly. When solved approximately, though, the obtained prices show an unstable, noisy behaviour. We present a simple way to improve the structure of the prices by penalizing their noisy behaviour, in particular using a total variation regularization. We illustrate the consistency of our regularization on real-life problems from EDF. We then consider variable decomposition (or Benders decomposition), which can have a very slow convergence. Taking a nonsmooth optimization point of view on this method, we address the instability of Benders' cutting-plane algorithm. We present an algorithmic stabilization inspired by bundle methods for convex optimization. The acceleration provided by this stabilization is illustrated on network design and hub location problems. We also study more general convex nonsmooth problems whose objective function is expensive to evaluate. This situation typically arises in decomposition methods. We show that there often exists additional information about the problem, cheap to obtain but of unknown accuracy, that is not used by the algorithms. We propose a way to incorporate this coarse information into classical nonsmooth optimization algorithms and apply it successfully to two-stage stochastic problems. Finally, we introduce a decomposition strategy for the machine reassignment problem. This decomposition leads to a new variant of vector bin packing problems, where the bins have variable sizes. We propose fast and efficient heuristics for this problem that improve on state-of-the-art results for vector bin packing. An adaptation of these heuristics is also able to generate feasible solutions for Google instances of the machine reassignment problem.
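To illustrate the bin packing variant mentioned at the end, the sketch below is a simple first-fit-decreasing heuristic for vector bin packing with variable-size bins. The toy instance, the largest-total-demand ordering, and the function name are illustrative assumptions; the thesis's heuristics are more elaborate.

```python
import numpy as np

def first_fit_decreasing(items, bin_capacities):
    """First-fit-decreasing heuristic for vector bin packing with variable-size bins.
    items: (n, d) resource demands; bin_capacities: (m, d) per-bin capacities.
    Returns an item -> bin assignment (-1 if an item does not fit anywhere)."""
    items = np.asarray(items, dtype=float)
    remaining = np.asarray(bin_capacities, dtype=float).copy()
    order = np.argsort(-items.sum(axis=1))          # place items with largest total demand first
    assignment = np.full(len(items), -1, dtype=int)
    for i in order:
        for b in range(len(remaining)):
            if np.all(items[i] <= remaining[b]):    # item fits in bin b on every dimension
                remaining[b] -= items[i]
                assignment[i] = b
                break
    return assignment

# Toy instance: 12 two-dimensional items (e.g., CPU and RAM) and 4 machines of different sizes
rng = np.random.default_rng(0)
items = rng.uniform(0.1, 0.6, size=(12, 2))
bins = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
print(first_fit_decreasing(items, bins))
```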
