11

La ville libre de Dantzig: son organisation intérieure et sa situation internationale

Otocki, Vladimir January 1930 (has links)
Doctorate in social, political and economic sciences / info:eu-repo/semantics/nonPublished
12

Estimation Statistique En Grande Dimension, Parcimonie et Inégalités D'Oracle

Lounici, Karim 24 November 2009 (has links) (PDF)
In this thesis we treat two topics. The first concerns statistical learning in high dimension, i.e., problems where the number of potential parameters is much larger than the number of available observations. In this context, the hypothesis generally adopted is that the number of parameters actually involved in the model is small compared with both the total number of potential parameters and the number of observations. This hypothesis is known as the ``\emph{sparsity assumption}''. We study the statistical properties of two types of procedures: procedures based on minimizing the empirical risk with an $l_{1}$ penalty over the set of potential parameters, and exponential-weighting procedures. The second topic concerns aggregation procedures in a density model. We establish oracle inequalities for the $L^{\pi}$ norm, $1\leqslant \pi \leqslant \infty$, and then propose an application to estimation that is minimax and adaptive to the smoothness of the density.
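As a concrete illustration of the first topic (a sketch, not code from the thesis), here is $l_1$-penalized empirical risk minimization for least squares solved by iterative soft-thresholding (ISTA); the dimensions, noise level, and penalty weight are arbitrary choices:

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Minimize (1/2)||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# High-dimensional toy example: 500 potential parameters, 100 observations,
# only 5 nonzero coefficients (the "sparsity assumption"); data are made up.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 500)) / np.sqrt(100)   # roughly unit-norm columns
x_true = np.zeros(500)
x_true[:5] = 3.0
y = A @ x_true + 0.1 * rng.standard_normal(100)
x_hat = ista_lasso(A, y, lam=0.4)
print("largest coefficients at:", np.sort(np.argsort(np.abs(x_hat))[-5:]))
```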
13

Empirical Analysis of Algorithms for Block-Angular Linear Programs

Dang, Jiarui January 2007 (has links)
This thesis studies the theoretical complexity and empirical performance of decomposition algorithms. We focus on linear programs with a block-angular structure. Decomposition algorithms used to be the only way to solve large-scale specially structured problems, given memory limits and CPU time. However, with the advances in computer technology over the past few decades, many large-scale problems can now be solved simply by using general-purpose LP software, without exploiting the problems' inner structure. A question arises naturally: should we solve a structured problem with decomposition, or solve it directly as a whole? We try to understand how a problem's characteristics influence its computational performance, and we compare the relative efficiency of algorithms with and without decomposition. Two comparisons are conducted in our research: first, the Dantzig-Wolfe decomposition method (DW) versus the simplex method; second, the analytic center cutting plane method (ACCPM) versus the interior point method (IPM). These comparisons fall into the two main solution approaches in linear programming: simplex-based algorithms and IPM-based algorithms. Motivated by our observations of ACCPM and DW decomposition, which are the counterparts of IPM and simplex in the decomposition framework, we devise a hybrid algorithm combining ACCPM and DW to take advantage of both: the fast convergence of IPM-based methods and the accuracy of simplex-based algorithms. A large set of 316 instances is used in our experiments, covering problems of different dimensions with primal or dual block-angular structures, to test our conclusions.
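As an illustration of the decomposition side of this comparison (a minimal sketch on made-up data, not the thesis's implementation), here is Dantzig-Wolfe column generation on a tiny two-block block-angular LP, with scipy's HiGHS solver playing both the restricted-master and pricing roles:

```python
import numpy as np
from scipy.optimize import linprog

# Toy block-angular LP, two blocks of two variables each:
#   min  c1.x1 + c2.x2
#   s.t. x1[0]+x1[1]+x2[0]+x2[1] <= 15         (coupling row)
#        x_k[0]+x_k[1] <= 12, 0 <= x_k <= 10   (block constraints)
c_blocks = [np.array([-3.0, -1.0]), np.array([-2.0, -4.0])]
A_link = [np.array([[1.0, 1.0]]), np.array([[1.0, 1.0]])]  # coupling rows per block
b_link = np.array([15.0])
D, d = np.array([[1.0, 1.0]]), np.array([12.0])            # same block polytope here
bounds = [(0.0, 10.0)] * 2

cols = [[np.zeros(2)], [np.zeros(2)]]    # initial vertex per block (feasible)
for _ in range(20):
    # Restricted master over convex combinations of the known block vertices.
    obj = [c_blocks[k] @ v for k in range(2) for v in cols[k]]
    Aub = np.column_stack([A_link[k] @ v for k in range(2) for v in cols[k]])
    Aeq = np.zeros((2, len(obj)))        # one convexity row per block
    j = 0
    for k in range(2):
        for _v in cols[k]:
            Aeq[k, j] = 1.0
            j += 1
    m = linprog(obj, A_ub=Aub, b_ub=b_link, A_eq=Aeq, b_eq=[1.0, 1.0],
                bounds=[(0, None)] * len(obj), method="highs")
    pi, mu = m.ineqlin.marginals, m.eqlin.marginals   # coupling / convexity duals

    # Pricing: one subproblem per block over its own polytope.
    new_col = False
    for k in range(2):
        sub = linprog(c_blocks[k] - A_link[k].T @ pi, A_ub=D, b_ub=d,
                      bounds=bounds, method="highs")
        if sub.fun - mu[k] < -1e-9:      # negative reduced cost -> add vertex
            cols[k].append(sub.x)
            new_col = True
    if not new_col:
        break

print("optimal value:", m.fun)           # -55.0 for this data
```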
15

Planning for Army Force Generation Using Lot Streaming, and Extensions

Markowski, Adria Elizabeth 06 December 2011 (has links)
As the Army transitions to the Army Force Generation (ARFORGEN) deployment cycle, it must adjust its many operations in support of ARFORGEN. Specifically, Initial Military Training (IMT) must be able to adjust the scheduling of its classes so that newly enlisted service members finish training in time to fulfill Brigade Combat Team (BCT) requirements within their common due windows. We formulate this problem as a lot streaming problem. Lot streaming splits a batch of jobs into sublots, which are then processed over the machines in an overlapping fashion. To schedule classes for IMT, two stages must be coordinated: Basic Training (BT) and Advanced Individual Training (AIT). For the Army Force Generation problem, the classes are treated as sublots that are streamed from one stage to the next. The model formulation must address the determination of class sizes and the scheduling of soldiers and classes at the two stages such that (1) the start times of the soldiers at Stage 2 are greater than their completion times at Stage 1, and (2) the requisite number of soldiers is assigned to each BCT, so as to minimize the total flow time. We propose a decomposition-based approach for the solution of this problem. To decompose the problem, the original lot streaming problem is reformulated so that the master problem selects an optimal combination of schedules for training classes and assignments of soldiers to BCTs. A complete schedule selected in the master problem includes the assignments of soldiers to classes in BT and AIT, and their assignments to the BCTs, so as to minimize the total flow time as well as earliness and tardiness for regular Army units. Earliness and tardiness are defined as the length of time a soldier waits before and after, respectively, the due date of the BCT to which he or she is assigned. Our decomposition-based method enables the solution of larger problem instances without running out of memory, and it affords CPU-time reductions compared with direct application of CPLEX 12.1. Our investigation into the structure of the problem has enabled further improvement of the proposed decomposition-based method. This improvement rests on a result we prove: the first- and second-stage scheduling problems need not be solved as one combined subproblem; rather, they can be solved sequentially, the first-stage problem followed by the second-stage problem. Combining the Stage 1 and Stage 2 problems into one subproblem forces the model to enumerate many additional candidate schedules; reducing this number of enumerations significantly decreases the computational effort of solving the model, thereby reducing CPU time. In the Sequential approach, the completion times of soldiers determined at Stage 1 are passed to Stage 2 as bounds on their completion times at Stage 2. We prove that solving the combined subproblem sequentially as two subproblems is optimal when the first stage has no limit on batch size and the ready times of the soldiers at Stage 1 are equal. The Army Force Generation problem has unequal ready times, so solving the first two stages as sequential subproblems can lead to suboptimal solutions; nevertheless, our experimental investigation shows the efficacy of this method for solving larger problem instances.
We also recommend various potential additions to improve the Sequential approach for application to the overall Army problem. We have also demonstrated our methodology on a real-life problem instance; it produces schedules for IMT with an estimated 28% reduction in mean flow time for soldiers over what is currently experienced in practice. We apply the Sequential approach to extensions of the problem at hand that pertain to hybrid flow shops and agile manufacturing environments. Results of our computational investigation show the effectiveness of the Sequential approach over direct solution by CPLEX from the viewpoint of both optimality gap and required CPU time. In particular, we consider two model configurations for a hybrid flow shop and three for an agile manufacturing facility. / Ph. D.
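As a toy illustration of the Sequential idea only (the thesis uses an exact column-generation decomposition with flow-time, earliness, and tardiness objectives; all names and numbers below are hypothetical), here is a two-stage schedule in which Stage 1 completion times become Stage 2 ready times:

```python
import heapq

def schedule_stage(jobs, n_machines, ready):
    """Greedy earliest-available-machine schedule; returns completion times.
    jobs: list of (job_id, duration); ready: dict job_id -> earliest start."""
    machines = [0.0] * n_machines           # next free time per machine
    heapq.heapify(machines)
    completion = {}
    for job_id, dur in sorted(jobs, key=lambda j: ready[j[0]]):
        free = heapq.heappop(machines)
        completion[job_id] = max(free, ready[job_id]) + dur
        heapq.heappush(machines, completion[job_id])
    return completion

# Hypothetical data: 6 training classes (sublots), durations in weeks.
bt_jobs = [(c, 10) for c in range(6)]       # Basic Training: 10 weeks each
ait_jobs = [(c, 8) for c in range(6)]       # AIT: 8 weeks each

# Stage 1: all classes ready at time 0, two BT training companies.
bt_done = schedule_stage(bt_jobs, n_machines=2, ready={c: 0 for c in range(6)})
# Stage 2 (Sequential approach): BT completion times become AIT ready times.
ait_done = schedule_stage(ait_jobs, n_machines=2, ready=bt_done)

print("mean flow time:", sum(ait_done.values()) / len(ait_done))
```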
16

Decomposition of Variational Inequalities with Applications to Nash-Cournot Models in Time of Use Electricity Markets

Celebi, Emre January 2011 (has links)
This thesis proposes equilibrium models that link the wholesale and retail electricity markets and allow reconciliation of the differing time scales on which producers (e.g., hourly) and consumers (e.g., monthly) respond to changing prices. Electricity market equilibrium models with a time-of-use (TOU) pricing scheme are formulated as large-scale variational inequality (VI) problems, a unified and concise approach to modeling the equilibrium. Demand response is dynamic in these models through a dependence on lagged demand. Different market structures are examined within this context. With an illustrative example, the welfare gains/losses from implementing a TOU pricing scheme in place of a single-pricing scheme are analyzed, and an approximation of the welfare change for this analysis is presented. Moreover, the break-up of a large supplier into smaller parts is investigated. For the illustrative examples presented in the dissertation, overall welfare gains for consumers and lower prices, closer to the levels of perfect competition, can be realized when the retail pricing scheme is changed from single pricing to TOU pricing. These models can be useful policy tools for regulatory bodies (i) to forecast future retail prices (TOU or single prices), (ii) to examine the market power exerted by suppliers, and (iii) to measure welfare gains/losses under different retail pricing schemes (e.g., single versus TOU pricing). With the inclusion of linearized DC network constraints, the problem size grows considerably. The Dantzig-Wolfe (DW) decomposition algorithm for VI problems is used to alleviate the computational burden, and it also facilitates model management and maintenance. A modification of the DW decomposition algorithm and an approximation of the DW master problem significantly reduce the computational effort required to find the equilibrium. These algorithms are applied to a two-region energy model for Canada and a realistic Ontario electricity test system. In addition to the empirical analysis, theoretical results on the convergence properties of the master problem approximation are presented for DW decomposition of VI problems.
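As a much-simplified illustration of the VI formulation (not the thesis's large-scale TOU model; all market data below are made up), here is a small Nash-Cournot equilibrium computed with the basic projection method for VIs:

```python
import numpy as np

# Nash-Cournot as a VI: find q* in K with F(q*).(q - q*) >= 0 for all q in K,
# where F_i(q) = c_i + b*q_i + b*sum(q) - a is the negative marginal profit of
# firm i under linear inverse demand p(Q) = a - b*Q and linear costs.
a, b = 100.0, 1.0
c = np.array([10.0, 15.0, 20.0])          # marginal costs of three firms
lo, hi = 0.0, 50.0                        # capacity box K = [0, 50]^3

def F(q):
    return c + b * q + b * q.sum() - a

# Basic projection method: q <- Proj_K(q - tau * F(q)); F is strongly monotone
# here, so a small fixed step converges.
q = np.zeros(3)
tau = 0.2
for _ in range(2000):
    q = np.clip(q - tau * F(q), lo, hi)

print("equilibrium quantities:", q.round(3))
# Analytic check for the interior solution: b*(I + 11^T) q = a - c.
print("check:", np.linalg.solve(b * (np.eye(3) + np.ones((3, 3))), a - c).round(3))
```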
17

Programmation linéaire en nombres entiers pour l'ordonnancement cyclique sous contraintes de ressources

Ayala Perez, Maria 15 June 2011 (has links) (PDF)
A cyclic scheduling problem consists of ordering in time the repeated execution of a set of operations linked by precedence constraints, using a limited number of resources. Such problems have immediate applications in production systems and in parallel computing. In particular, they can model the full set of precedence and resource constraints involved in instruction scheduling for VLIW (Very Long Instruction Word) processors, where an operation represents one instance of an instruction in a program. Scheduling the instructions of inner loops is known as software pipelining, an effective loop-optimization method that allows operations from different iterations of the loop to execute in parallel. In this thesis, we focus mainly on the periodic scheduling problem, a special case of cyclic scheduling that also underlies software pipelining. The term modulo scheduling denotes a periodic schedule in which the resource allocation of a given operation does not change from one iteration to the next. To solve the problem, we study integer linear programming formulations, in particular solution techniques based on branch and bound, column generation, Lagrangian relaxation, and hybrid methods. Notably, we propose new formulations based on binary variables representing the execution of sets of instructions in parallel. Finally, the methods developed were validated on sets of industrial instances for VLIW processors.
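As an illustration of the periodic precedence constraints at the heart of modulo scheduling (a sketch, not one of the thesis's ILP formulations), the recurrence-constrained minimum initiation interval can be found by testing, for each candidate II, whether the constraints t_j - t_i >= latency - distance*II admit a solution, i.e., whether the dependence graph has no positive cycle; the loop body below is made up:

```python
# Dependence graph of a toy loop body: edges (i, j, latency, distance) meaning
# operation j, "distance" iterations later, must start >= t_i + latency.
edges = [
    (0, 1, 2, 0),   # intra-iteration dependence
    (1, 2, 3, 0),
    (2, 0, 1, 1),   # loop-carried dependence (recurrence of distance 1)
    (1, 3, 1, 0),
]
n_ops = 4

def feasible(ii):
    """Bellman-Ford on weights -(latency - distance*ii): a negative cycle in
    the negated graph is a positive cycle in the original, i.e., the periodic
    precedence constraints have no solution at this initiation interval."""
    dist = [0.0] * n_ops                  # virtual source to every node
    for _ in range(n_ops):
        for i, j, lat, d in edges:
            w = -(lat - d * ii)
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    # One more pass: any further relaxation reveals a negative cycle.
    return all(dist[i] - (lat - d * ii) >= dist[j] for i, j, lat, d in edges)

# Recurrence-constrained minimum II: smallest feasible integer ii.
rec_mii = next(ii for ii in range(1, 100) if feasible(ii))
print("RecMII =", rec_mii)  # cycle 0->1->2->0: latencies 2+3+1=6, distance 1 -> 6
```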
19

Primal dual pursuit: a homotopy based algorithm for the Dantzig selector

Asif, Muhammad Salman 10 July 2008 (has links)
Consider the system model y = Ax + e, where x is an n-dimensional sparse signal, y is the measurement vector in a much lower dimension m, A is the measurement matrix, and e is the measurement error. The Dantzig selector estimates x by solving the following optimization problem: minimize || x ||₁ subject to || A'(Ax - y) ||∞ ≤ ε (DS). This is a convex program that can be recast as a linear program and solved using any modern optimization method, e.g., interior point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "Primal-Dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving (DS): it computes the solution to (DS) for a range of successively relaxed problems, starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε is reduced to the desired value, which gives the final solution. The homotopy path that the solution of (DS) traces as ε varies is piecewise linear; at certain critical values of ε along this path, new elements enter the support of the signal or existing elements leave it. We derive the optimality and feasibility conditions used to update the solution at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case, showing that if the signal is S-sparse, then all S of its elements can be found in exactly S steps using about S² log n random measurements, with very high probability.
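The LP recast mentioned above is straightforward to sketch: writing x = u - v with u, v ≥ 0 turns (DS) into a linear program over 2n nonnegative variables. Below is a minimal version using a general-purpose solver, the kind of baseline the homotopy algorithm is designed to outperform; the instance data and the choice of ε are made up:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, eps):
    """Solve min ||x||_1 s.t. ||A'(Ax - y)||_inf <= eps as an LP via x = u - v."""
    n = A.shape[1]
    M, r = A.T @ A, A.T @ y
    # Stack  M(u-v) - r <= eps  and  -(M(u-v) - r) <= eps.
    A_ub = np.block([[M, -M], [-M, M]])
    b_ub = np.concatenate([r + eps, eps - r])
    res = linprog(np.ones(2 * n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy instance: m = 50 measurements of a 5-sparse signal in n = 100 dimensions;
# eps is chosen comfortably above the noise level ||A'e||_inf.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 17, 42, 66, 90]] = [2, -3, 1.5, 2.5, -2]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = dantzig_selector(A, y, eps=0.05)
print("largest coefficients at:", np.sort(np.argsort(np.abs(x_hat))[-5:]))
```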
20

Statistical Applications of Linear Programming for Feature Selection via Regularization Methods

Yao, Yonggang 01 October 2008 (has links)
No description available.
