  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Note on Nadir Values in Bicriteria Programming Problems

Dür, Mirjam January 2000 (PDF)
In multiple criteria programming, a decision maker has to choose a point from the set of efficient solutions. This is usually done by some interactive procedure, in which he or she moves from one efficient point to the next until an acceptable solution has been reached. It is therefore important to provide some information about the "size" of the efficient set, i.e. to know the minimum (and maximum) criterion values over the efficient set. This is a difficult problem in general. In this paper, we show that for the bicriteria problem, the problem is easy. This holds not only for the linear bicriteria problem but also for more general problems. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
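For a finite set of outcome vectors, the idea in this abstract can be made concrete: the nadir point collects, per criterion, the worst value attained over the efficient set, and for two criteria it is determined by the two lexicographic optima. A minimal sketch (the outcome vectors are illustrative, not from the paper):

```python
def pareto_min(points):
    """Return the nondominated points (minimization in both criteria)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def ideal_and_nadir(points):
    """Ideal = componentwise min over efficient points; nadir = componentwise max."""
    eff = pareto_min(points)
    ideal = (min(p[0] for p in eff), min(p[1] for p in eff))
    nadir = (max(p[0] for p in eff), max(p[1] for p in eff))
    return ideal, nadir

# Illustrative finite outcome set (f1, f2), both to be minimized.
outcomes = [(1, 9), (2, 6), (4, 4), (7, 2), (9, 1), (5, 8), (8, 7)]
ideal, nadir = ideal_and_nadir(outcomes)
```

Note that each nadir component here is attained at one of the two lexicographic optima, (1, 9) and (9, 1), which is what makes the bicriteria case easy relative to three or more criteria.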
2

Probabilistic bicriteria models : sampling methodologies and solution strategies

Rengarajan, Tara 14 December 2010
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty of system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably, keeping operating costs low at the same time. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step in the direction of addressing this issue. We show how to construct approximate solutions using Monte Carlo sampling that are sufficiently close to optimal, easily calculable, and subject to a low margin of error. Our approximations can be used in bicriteria decision making across several domains that involve significant risk such as finance, logistics and revenue management. As a first approach, we place a premium on a low risk threshold, and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield savings of a high order. We also demonstrate the existence of generalized stratification techniques which enjoy this property, and which can be used without full distributional knowledge of the parameters that govern the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk.
Improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem. In contrast to this approach, we next aim to solve a bicriteria facility sizing model, in which risk is the probability the system fails to jointly satisfy a vector-valued random demand. Here, instead of seeking a probabilistic guarantee on risk, we seek to approximate well the efficient frontier for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we proceed to solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in terms of what is known regarding their asymptotic behavior, their computational tractability, and even their feasibility as compared to the underlying "true" family of models. We establish, however, that in the bicriteria setting we have the freedom to employ either the chance-constrained or cost-constrained family of models, improving our ability to characterize the quality of the efficient frontiers arising from these sampling-based approximations, and improving our ability to solve the approximating model itself. Our computational results reinforce the need for such flexibility, and enable us to understand the behavior of confidence bounds for the efficient frontier. As a final step, we further study the efficient frontier in the cost versus risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly-reliable system. In this case, the "true" efficient frontier is convex.
We show that the convex hull of the efficient frontier of a sampling-based approximation: (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem; and, (ii) converges uniformly to the true efficient frontier, when the latter is convex. We conclude with numerical studies that demonstrate the aforementioned properties.
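The cost-versus-risk tradeoff described in this abstract can be illustrated with a toy one-dimensional facility sizing model: the true risk P(demand > capacity) is replaced by its empirical counterpart over a fixed sample, and sweeping candidate capacities traces an empirical efficient frontier. All numbers below are illustrative stand-ins for Monte Carlo draws, not data from the thesis:

```python
# Fixed demand sample standing in for Monte Carlo draws of the random demand.
sample = [3, 5, 7, 8, 10, 12, 12, 15, 18, 20]
unit_cost = 2.0  # illustrative cost per unit of capacity

def empirical_risk(capacity, sample):
    """Fraction of sampled demands the sized facility fails to satisfy."""
    return sum(d > capacity for d in sample) / len(sample)

# Evaluate (cost, risk) for each candidate capacity and keep the
# nondominated pairs: the empirical efficient frontier.
candidates = range(0, 21)
points = [(unit_cost * x, empirical_risk(x, sample)) for x in candidates]
frontier = sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))
```

Each frontier point is the cheapest capacity attaining a given empirical risk level; with a different sample the frontier shifts, which is exactly why the thesis studies confidence bounds on it.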
3

Optimization of multimedia flows over data networks : the core location problem and the peakedness characterization

Macq, Jean-François 19 May 2005
In the first part of the thesis, we address the optimization of multimedia applications such as videoconferences or multi-player games in which user-dependent information has to be sent from the users to a core node to be chosen, and then global information has to be multicast back from the core node to all users. For a given communication network, this optimization seeks a core node under two potentially competing criteria, one being the sum of the distances to a set of user terminals, the other being the cost of connecting this core node and the terminals with a multicast (or Steiner) tree. We first consider the problem of minimizing a weighted sum of the two criteria and propose a heuristic which rapidly computes a solution guaranteed to be within a few percent of the optimum. Then we characterize the worst-case trade-offs between approximation ratios for the two criteria. To state our result informally, we show that there always exists a core location for which each criterion is close to its minimum value (if we were to disregard the other criterion). In the second part, we focus on the protection of multimedia streaming applications against packet losses. Because of real-time constraints, error recovery is often achieved by Forward Error Correction (FEC) techniques which consist of partitioning the packet stream into blocks of consecutive packets and adding redundant data packets to each block. If the number of packets lost within a block is at most the number of redundant packets added, the receiver is able to recover the original data packets. Otherwise some data is irrecoverably lost. In communication networks, FEC techniques are typically impaired by the fact that packet losses are not evenly distributed among blocks but rather occur in long bursts of consecutive losses. However it has been observed that splitting the transmission of a FEC block onto several paths typically decreases the probability of an irrecoverable loss. 
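The weighted-sum version of the core location problem can be sketched over a toy instance in which the two criterion values (sum of distances to the terminals, and cost of the Steiner tree rooted at the candidate) have already been computed per candidate node. The numbers are illustrative, and computing the criteria from an actual network is outside the scope of this sketch:

```python
# Per-candidate criterion values: node -> (sum_of_distances, steiner_cost).
# Illustrative numbers; in the thesis both are derived from the network.
criteria = {
    "a": (10.0, 7.0),
    "b": (12.0, 5.0),
    "c": (9.0, 9.0),
    "d": (15.0, 4.0),
}

def best_core(criteria, alpha):
    """Core node minimizing alpha*distance_sum + (1 - alpha)*steiner_cost."""
    return min(criteria, key=lambda v: alpha * criteria[v][0]
                                       + (1 - alpha) * criteria[v][1])

# Sweeping the weight alpha traces the tradeoff between the two criteria.
choices = {round(a / 10, 1): best_core(criteria, a / 10) for a in range(11)}
```

At the extremes the sweep recovers the single-criterion optima (the minimum-distance-sum node and the minimum-Steiner-cost node); intermediate weights expose the compromise cores whose worst-case quality the thesis characterizes.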
Whereas current approaches rely on an exact computation of the probability and are consequently restricted to very small network instances, we propose to approximate this probability by measuring the impact of the chosen routing on the peakedness of the received packet stream. The peakedness of a stream may be seen as a measure of how packets are spread over time within the stream. Numerical experiments are presented and show that our method yields good approximations of the probability of irrecoverable loss.
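The benefit of splitting an FEC block across several paths can be seen with a deterministic toy example: a burst of consecutive losses on one path wipes out several packets of a block sent over a single path, but only the alternate packets of a block interleaved over two paths. Block size, redundancy, and burst position below are illustrative:

```python
def recoverable(lost_packets, redundancy):
    """An FEC block can be decoded iff at most `redundancy` of its
    packets (data or redundant) were lost."""
    return len(lost_packets) <= redundancy

# One FEC block: 4 data packets + 2 redundancy packets, indices 0..5.
block = list(range(6))
redundancy = 2

# A burst of 3 consecutive losses hits the network.
burst = {2, 3, 4}

# Single path: every packet of the block traverses the burst window,
# so the block loses all three packets in the window.
single_path_lost = [i for i in block if i in burst]   # [2, 3, 4]

# Two paths: even-indexed packets on path A, odd-indexed on path B.
path_a = [i for i in block if i % 2 == 0]             # [0, 2, 4]
path_b = [i for i in block if i % 2 == 1]             # [1, 3, 5]

# The same burst occurs on path A only, so the block loses just the
# path-A packets that fall inside the window.
split_lost = [i for i in path_a if i in burst]        # [2, 4]
```

With redundancy 2, the single-path block loses 3 packets and is irrecoverable, while the interleaved block loses only 2 and decodes; estimating this effect at scale, without enumerating loss patterns, is what the peakedness approximation is for.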
4

A Bicriteria Rescheduling Problem On Unrelated Parallel Machines: Network Flow And Enumeration Based Approaches

Ozlen, Melih 01 November 2006 (PDF)
This study considers bicriteria approaches to the minimum cost network flow problem and a rescheduling problem where those approaches find their applications. For the bicriteria integer minimum cost network flow problem, we generate all efficient solutions in two phases. The first phase generates the extreme supported efficient points that are the extreme points of the objective space of the continuous bicriteria network flow problem. In the second phase, we generate the nonextreme supported and unsupported efficient points by Integer Programming Based approaches. Our rescheduling problem considers parallel unrelated machine environments. The criteria are the total flow time as an efficiency measure and the total reassignment cost as a stability measure. We show that the problems that address linear functions of the two criteria can be represented by bicriteria network flow models. To generate all efficient solutions, we use a Classical Approach that is based on the optimal solutions of the singly constrained network flow problem and provide a Branch and Bound approach that starts with the extreme supported efficient set and uses powerful bounds. To find an optimal solution to any nonlinear function of the two criteria, we provide a Branch and Bound approach and an Integer Programming Based approach that eliminates some portions of the efficient set that cannot provide improved solutions. We contribute both to the network flow and scheduling literature by proposing algorithms for the bicriteria network flow models and applying them to a rescheduling problem that is bicriteria in nature. The results of our extensive computations with up to 100 jobs and 12 machines have revealed that the Branch and Bound algorithm finds the efficient set with less computational effort than the classical approach. In minimizing a nonlinear function of the two criteria, both the IP Based approach and the Branch and Bound algorithm perform quite satisfactorily.
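The first phase described in this abstract, finding the extreme supported efficient points, is commonly done by a dichotomic weighted-sum search: anchor at the two lexicographic optima, then recursively minimize the weighted sum whose weights are normal to the segment joining two known points. A sketch over an explicit finite outcome set, where enumeration stands in for the network flow solve of the actual method:

```python
def argmin_weighted(points, w1, w2):
    """Solve the scalarized subproblem min w1*f1 + w2*f2 by enumeration
    (a stand-in for solving a single-objective network flow problem)."""
    return min(points, key=lambda p: w1 * p[0] + w2 * p[1])

def extreme_supported(points):
    """Dichotomic weighted-sum search for extreme supported points."""
    # The two lexicographic optima anchor the search.
    left = min(points, key=lambda p: (p[0], p[1]))
    right = min(points, key=lambda p: (p[1], p[0]))
    found = {left, right}

    def recurse(a, b):
        # Weights normal to the segment from a to b.
        w1, w2 = a[1] - b[1], b[0] - a[0]
        c = argmin_weighted(points, w1, w2)
        # Strict improvement over the segment means a new extreme point.
        if w1 * c[0] + w2 * c[1] < w1 * a[0] + w2 * a[1]:
            found.add(c)
            recurse(a, c)
            recurse(c, b)

    recurse(left, right)
    return sorted(found)

# Illustrative outcome vectors (f1, f2), both minimized.
outcomes = [(1, 9), (2, 6), (4, 4), (7, 2), (9, 1), (5, 8), (3, 7)]
```

Phase two of the method then searches the triangles between consecutive extreme supported points for nonextreme supported and unsupported efficient points, which is where the integer programming machinery enters.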
