1

New Benders' Decomposition Approaches for W-CDMA Telecommunication Network Design

Naoum-Sawaya, Joe January 2007 (has links)
Network planning is an essential phase in successfully operating state-of-the-art telecommunication systems. It helps carriers increase revenues by deploying the right technologies in a cost-effective manner. More importantly, through the network planning phase, carriers determine the capital needed to build the network as well as competitive pricing for the offered services. In this phase, radio tower locations are selected from a pool of candidate locations so as to maximize the net revenue acquired from servicing a number of subscribers. In the Universal Mobile Telecommunications System (UMTS), which is based on the Wideband Code Division Multiple Access scheme (W-CDMA), the coverage area of each tower, called a cell, is affected not only by the signal's attenuation but also by the assignment of users to towers. As the number of users in the system increases, interference levels rise and cell sizes shrink. This complicates the network planning problem, since the capacity and coverage problems cannot be solved separately. To identify the optimal base station locations, traffic intensity and potential locations are determined in advance; base station locations are then chosen so as to satisfy the minimum geographical coverage and minimum quality-of-service levels imposed by licensing agencies. This is implemented through two types of power control mechanisms. The power based power control mechanism, often discussed in the literature, adjusts the power of the transmitted signal so that the power at the receiver exceeds a given threshold. The signal-to-interference ratio (SIR) based power control mechanism, on the other hand, adjusts the power of the transmitted signal so that the ratio of the power of the received signal to the power of the interfering signals exceeds a given threshold.
Solving the SIR based UMTS/W-CDMA network planning problem helps network providers design efficient and cost-effective network infrastructure. In contrast to the power based UMTS/W-CDMA network planning problem, the solution of the SIR based model results in higher profits: the power of the transmitted signals is decreased, which lowers interference and therefore increases the capacity of the overall network. Even though the SIR based power control mechanism is more efficient than the power based mechanism, its implementation is more complex, and it has received less attention in the network planning literature. In this thesis, a non-linear mixed integer problem that models the SIR based power control system is presented. The non-linear constraints are reformulated using linear expressions, and the problem is solved exactly using a Benders decomposition approach. To overcome the computational difficulties faced by Benders decomposition, two novel extensions are presented. The first uses the analytic center cutting plane method for the Benders master problem in an attempt to reduce the number of times the integer Benders master problem is solved. Additionally, we describe a heuristic that uses the analytic center properties to find feasible solutions for mixed integer problems. The second extension introduces a combinatorial Benders decomposition algorithm, which may be used for solving mixed integer problems with binary variables. In contrast to the classical Benders decomposition algorithm, where the master problem is a mixed integer problem and the subproblem is a linear problem, this algorithm decomposes the problem into a mixed integer master problem and a mixed integer subproblem. The subproblem is then decomposed using classical Benders decomposition, leading to a nested Benders algorithm.
Valid cuts generated at the classical Benders subproblem are added to the combinatorial Benders master problem to enhance the performance of the algorithm. It was found that valid cuts generated using the analytic center cutting plane method reduce the number of times the integer Benders master problem is solved and therefore reduce the computational time. It was also found that the combinatorial Benders decomposition reduces the complexity of the integer master problem by reducing the number of integer variables in it. The valid cuts generated within the nested Benders algorithm proved beneficial in reducing the number of times the combinatorial Benders master problem is solved and in reducing the overall computational time. Over 110 instances of the UMTS/W-CDMA network planning problem, ranging from 20 demand points and 10 base stations to 140 demand points and 30 base stations, are solved to optimality.
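The master/subproblem interplay at the heart of this abstract can be sketched on a toy instance. The following is a minimal illustration of classical Benders decomposition, not the thesis's W-CDMA model: the problem instance, its bounds, and the analytic subproblem are all invented for exposition.

```python
# Minimal sketch of classical Benders decomposition on a toy MILP:
#   min  y + x   s.t.  x >= 2 - y,  x >= y - 1,  x >= 0,  y in {0,1,2,3}
# We alternate between an integer master over (y, theta) and an LP
# subproblem in x; the subproblem's active constraint supplies a Benders
# cut theta >= a*y + b.

def subproblem(y):
    """Solve min x s.t. x >= 2-y, x >= y-1, x >= 0 analytically.
    Each constraint is an affine piece x >= a*y + b; the maximal piece is
    both the optimal value and the Benders cut (the dual certificate)."""
    pieces = [(-1.0, 2.0), (1.0, -1.0), (0.0, 0.0)]   # (a, b) pairs
    a, b = max(pieces, key=lambda p: p[0] * y + p[1])
    return a * y + b, (a, b)

def solve_master(cuts):
    """Brute-force the tiny integer master: min y + theta s.t. cuts."""
    best = None
    for y in range(4):
        theta = max((a * y + b for a, b in cuts), default=0.0)
        if best is None or y + theta < best[0]:
            best = (y + theta, y)
    return best

cuts, ub = [], float("inf")
while True:
    lb, y = solve_master(cuts)        # lower bound from relaxed master
    q, cut = subproblem(y)            # evaluate the candidate y
    ub = min(ub, y + q)               # feasible solution -> upper bound
    if ub - lb < 1e-9:                # bounds meet: optimal
        break
    cuts.append(cut)

print(y, ub)
```

In a realistic setting the master is a MILP solved by a solver and the subproblem a large LP; the analytic-center and combinatorial variants described above change how the master is solved and decomposed, not this basic cut-generation loop.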
3

指數基金追蹤模型的最佳化 / A Tracking Model for Index Fund Portfolio Optimization

白惠琦 Unknown Date (has links)
An index fund is an investment tool that tracks a stock-market index and is thus associated with market risk only; its attraction to investors is low investment risk and low administrative expenses. Four approaches to index fund construction can be distinguished: full replication, stratification, sampling, and optimization. In this thesis, we construct an index fund via a goal programming model, which falls under the optimization approach. Because the number of shares held in each stock is modelled as an integer variable, and 0-1 variables control the number of stocks included, the model is a mixed integer linear program. The exact optimal solution cannot be obtained when the model becomes large. We therefore derive a valid inequality that shrinks the solution space and use it within a cutting plane method, and we propose an efficient heuristic based on the model's duality properties. Finally, an empirical study tracking the Taiwan Stock Exchange Capitalization Weighted Stock Index demonstrates the efficiency of the algorithm.
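The optimizing approach the abstract describes can be illustrated, in miniature, by brute force. The sketch below uses an invented four-stock universe (the weights are not TAIEX data): pick K stocks, renormalise their index weights, and minimise the total absolute deviation from the full index; real instances of course need the MILP, cutting-plane, and heuristic machinery the thesis develops.

```python
from itertools import combinations

# Toy sketch of index tracking by subset selection: choose K of the
# index's stocks and hold them in proportion to their (renormalised)
# index weights, minimising total absolute weight deviation.

index_weights = [0.4, 0.3, 0.2, 0.1]   # hypothetical index composition
K = 2                                   # cardinality cap (the 0-1 variables in the MILP)

def tracking_deviation(subset):
    """Sum of |index weight - portfolio weight| over all stocks."""
    total = sum(index_weights[i] for i in subset)
    dev = 0.0
    for i, w in enumerate(index_weights):
        held = index_weights[i] / total if i in subset else 0.0
        dev += abs(w - held)
    return dev

best = min(combinations(range(len(index_weights)), K), key=tracking_deviation)
print(best, round(tracking_deviation(best), 4))
```

Enumerating all subsets is exponential in the universe size, which is precisely why the thesis resorts to valid inequalities and a dual-based heuristic for large models.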
4

Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems

Hellman, Fredrik January 2010 (has links)
This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: first, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are preferable; second, how the performance of the considered solution methods scales with network size; third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to answer these questions is made through a series of numerical experiments. The traffic system is modeled using Wardrop's principle for user behavior and separable cost functions of BPR and TU71 type; elastic demand is also considered for some problem instances. Two previously developed approaches are considered: implicit programming and a cutting constraint algorithm. For the implicit programming approach, several methods, both local and global, are applied, and for the traffic assignment problem an implementation of the disaggregate simplicial decomposition (DSD) method is used. Regarding the first question, concerning local and global methods, our results do not give a clear answer. The results from numerical experiments with both approaches on networks of different sizes show that the implicit programming approach has the potential to solve large-scale problems, while the cutting constraint algorithm scales worse with network size. Also for the stochastic extension of the network design problem, the numerical experiments indicate that implicit programming is a good approach. Further, a number of theorems providing sufficient conditions for strong regularity of the traffic assignment solution mapping for OD connectors and BPR cost functions are given.
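The lower-level model behind every MPEC instance here is a Wardrop user equilibrium with BPR cost functions, t(x) = t0 * (1 + 0.15 * (x/cap)^4). A minimal sketch on two parallel links shows the equilibrium condition; link data and demand are invented, and real networks require a proper traffic-assignment solver such as the DSD method mentioned above.

```python
# Wardrop equilibrium on two parallel links with BPR costs: at an
# interior equilibrium both used links have equal travel time, so we
# bisect on the flow assigned to link 0.

t0  = [1.0, 2.0]     # free-flow travel times (hypothetical)
cap = [10.0, 10.0]   # link capacities (hypothetical)
demand = 25.0        # total origin-destination demand (hypothetical)

def bpr(i, x):
    """BPR travel time on link i carrying flow x."""
    return t0[i] * (1.0 + 0.15 * (x / cap[i]) ** 4)

lo, hi = 0.0, demand
for _ in range(100):
    x = 0.5 * (lo + hi)
    if bpr(0, x) < bpr(1, demand - x):
        lo = x          # link 0 still cheaper: shift more flow onto it
    else:
        hi = x
x_eq = 0.5 * (lo + hi)
print(x_eq, bpr(0, x_eq), bpr(1, demand - x_eq))
```

Bisection works here because BPR costs are strictly increasing in flow; on general networks the equilibrium is the solution of a convex program, and it is this mapping whose strong regularity the thesis's theorems address.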
5

Empirical Analysis of Algorithms for Block-Angular Linear Programs

Dang, Jiarui January 2007 (has links)
This thesis studies the theoretical complexity and empirical performance of decomposition algorithms, focusing on linear programs with a block-angular structure. Decomposition algorithms used to be the only way to solve large-scale specially structured problems, in terms of both memory limits and CPU time. However, with the advances in computer technology over the past few decades, many large-scale problems can now be solved by general purpose LP software without exploiting the problems' inner structure. A question arises naturally: should we solve a structured problem with decomposition, or solve it directly as a whole? We try to understand how a problem's characteristics influence its computational performance, and compare the relative efficiency of algorithms with and without decomposition. Two comparisons are conducted: first, the Dantzig-Wolfe decomposition method (DW) versus the simplex method; second, the analytic center cutting plane method (ACCPM) versus the interior point method (IPM). These comparisons correspond to the two main solution approaches in linear programming: simplex-based and IPM-based algorithms. Motivated by our observations of ACCPM and DW decomposition, which are the counterparts of IPM and simplex in the decomposition framework, we devise a hybrid algorithm combining the two, to take advantage of both the quick convergence of IPM-based methods and the accuracy of simplex-based algorithms. A large set of 316 instances is used in our experiments, covering differently dimensioned problems with primal or dual block-angular structures.
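The structural fact that all of these decomposition methods exploit is that relaxing the coupling constraints makes a block-angular problem separate into independent block subproblems. The sketch below illustrates this with Lagrangian relaxation, which is not DW or ACCPM themselves but the simplest member of the same family; the two-block instance is invented and small enough that each block subproblem has a closed-form minimiser.

```python
# Toy block-angular LP:
#   min x1 + 2*x2  s.t.  x1 + x2 >= 3 (single coupling row),
#                        1 <= x1 <= 5,  1 <= x2 <= 5  (the "blocks").
# Relaxing the coupling row with multiplier lam >= 0 gives a dual
# function that separates into one box-LP per block.

costs = [1.0, 2.0]
lb_box, ub_box = 1.0, 5.0
rhs = 3.0

def lagrangian(lam):
    """Evaluate the dual function; each block solved independently."""
    val, xs = rhs * lam, []
    for c in costs:
        coeff = c - lam
        x = lb_box if coeff >= 0 else ub_box   # minimiser of coeff*x on a box
        val += coeff * x
        xs.append(x)
    return val, xs

lam, best = 0.0, float("-inf")
for k in range(1, 200):
    g, xs = lagrangian(lam)
    best = max(best, g)
    subgrad = rhs - sum(xs)                  # violation of the coupling row
    lam = max(0.0, lam + subgrad / k)        # diminishing-step ascent

print(best)   # best dual bound found; for an LP there is no duality gap
```

DW column generation and ACCPM replace this crude subgradient ascent on the multiplier with, respectively, a restricted master LP over block solutions and an analytic-center query point, which is where the convergence and accuracy trade-offs studied in the thesis arise.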
8

New approaches to integer programming

Chandrasekaran, Karthekeyan 28 June 2012 (has links)
Integer Programming (IP) is a powerful and widely used formulation for combinatorial problems. The study of IP over the past several decades has led to fascinating theoretical developments and has improved our ability to solve discrete optimization problems arising in practice. This thesis makes progress on algorithmic solutions for IP by building on combinatorial, geometric, and Linear Programming (LP) approaches. We use a combinatorial approach to give an approximation algorithm for the feedback vertex set problem (FVS) in the recently developed implicit hitting set framework. Our algorithm is a simple online algorithm that finds a nearly optimal FVS in random graphs. We also propose a planted model for FVS and show that an optimal hitting set for a polynomial number of subsets is sufficient to recover the planted subset. Next, we present a previously unexplored geometric connection between integer feasibility and the classical notion of discrepancy of matrices. We exploit this connection to show a phase transition from infeasibility to feasibility in random IP instances. A recent algorithm for low-discrepancy solutions leads to an efficient algorithm that finds an integer point for random IP instances that are feasible with high probability. Finally, we give a provably efficient implementation of a cutting-plane algorithm for perfect matchings. In our algorithm, cuts separating the current optimum are easy to derive, while a small LP is solved to identify the cuts to retain for later iterations. Our result gives a rigorous theoretical explanation for the practical efficiency of the cutting plane approach for perfect matching that is evident from implementations. In summary, this thesis contributes new models and connections, new algorithms, and rigorous analyses of well-known approaches for IP.
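The implicit hitting set view of FVS treats the (exponentially many) cycles of the graph as the subsets to hit, generating them lazily: find a cycle the current set misses, hit it, repeat. The sketch below is only the lazy-generation skeleton with a naive greedy hitting rule (highest-degree vertex on the found cycle); the thesis's online algorithm and its near-optimality guarantees are more refined.

```python
# Lazy "implicit hitting set" skeleton for feedback vertex set on an
# undirected graph: repeatedly find a cycle avoiding the chosen set,
# then greedily add one of its vertices.

def find_cycle(adj, removed):
    """Return one cycle (list of vertices) avoiding `removed`, or None."""
    parent, visited, found = {}, set(), []

    def dfs(v, par):
        visited.add(v)
        parent[v] = par
        for w in sorted(adj[v]):
            if w in removed or w == par:
                continue
            if w in visited:
                # back edge v-w closes a cycle: walk parents from v to w
                cycle, u = [w], v
                while u != w:
                    cycle.append(u)
                    u = parent[u]
                found.append(cycle)
                return True
            if dfs(w, v):
                return True
        return False

    for s in sorted(adj):
        if s not in removed and s not in visited and dfs(s, None):
            return found[0]
    return None

def greedy_fvs(adj):
    fvs = set()
    while True:
        cycle = find_cycle(adj, fvs)
        if cycle is None:
            return fvs
        fvs.add(max(cycle, key=lambda v: len(adj[v])))  # naive hitting rule

edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]  # two triangles sharing 0
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)
print(greedy_fvs(adj))
```

On this toy graph both cycles pass through the shared vertex, so one greedy pick already hits every cycle; in general the greedy rule has no guarantee, which is what the hitting set analysis in the thesis addresses.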
9

Product Differentiation and Operations Strategy for Price and Time Sensitive Markets

Jayaswal, Sachin January 2009 (has links)
In this dissertation, we study the interplay between a firm's operations strategy, with regard to its capacity management, and its marketing decision of product differentiation. For this, we study a market comprising heterogeneous customers who differ in their preferences for time and price. Time sensitive customers are willing to pay a price premium for a shorter delivery time, while price sensitive customers are willing to accept a longer delivery time in return for a lower price. Firms exploit this heterogeneity in customers' preferences and offer a menu of products/services that differ only in their guaranteed delivery times and prices. From the demand perspective, when customers are allowed to self-select according to their preferences, the different products act as substitutes, affecting each other's demand. A customized product for each segment, on the other hand, results in independent demand for each product. On the supply side, a firm may either share the same processing capacity to serve the two market segments or dedicate capacity to each segment. Our objective is to understand the interaction between product substitution and the firm's operations strategy (dedicated versus shared capacity), and how they shape the optimal product differentiation strategy. To address this, we first study the problem for a single monopolist firm that offers two versions of the same basic product: (i) a regular product at a lower price but with a longer delivery time, and (ii) an express product at a higher price but with a shorter delivery time. Demand for each product arrives according to a Poisson process with a rate that depends on both its price and its delivery time. In addition, if the products are substitutable, each product's demand is also influenced by the price and delivery time of the other product. Demands within each category are served on a first-come, first-served basis.
However, customers for the express product are always given priority over the other category when served using shared resources. There is a standard delivery time for the regular product, and the firm's objective is to price the two products appropriately and select the express delivery time so as to maximize its profit rate. The firm simultaneously needs to decide its installed processing capacity so as to meet its promised delivery times with a high degree of reliability. While the problem in a dedicated capacity setting is solved analytically, it becomes very challenging in a shared capacity setting, especially in the absence of an analytical characterization of the delivery time distribution of regular customers in a priority queue. We develop a solution algorithm, using the matrix geometric method in a cutting plane framework, to solve the problem numerically in the shared capacity setting. Our study shows that in a highly capacitated system, if the firm moves from a dedicated to a shared capacity setting, it will need to offer more differentiated products, whether or not the products are substitutable. In contrast, when customers are allowed to self-select, so that independent products become substitutable, a more homogeneous pricing scheme results. However, the effect of substitution on optimal delivery time differentiation depends on the firm's capacity strategy and cost, as well as market characteristics. The optimal response to any change in capacity cost also depends on the firm's operations strategy. In a dedicated capacity scenario, the optimal response to an increase in capacity cost is always to offer more homogeneous prices and delivery times. In a shared capacity setting, it is again optimal to quote more homogeneous delivery times, but to increase or decrease the price differentiation depending on whether the status-quo capacity cost is high or low, respectively.
We demonstrate that the above results are corroborated by real-life practices, and provide a number of managerial implications, for instance for dealing with volatile fuel prices. We further extend our study to a competitive setting with two firms, each of which may either share its processing capacity between the two products or dedicate capacity to each product. The demand faced by each firm for a given product now also depends on the price and delivery time quoted for that product by the other firm. We observe that the qualitative results of the monopolistic setting extend to the competitive setting. Specifically, in a highly capacitated system, the equilibrium prices and delivery times result in more differentiated products when both firms use shared capacities than when both firms use dedicated capacities. When the competing firms are asymmetric, they exploit their distinctive characteristics to differentiate their products. Further, the effects of these asymmetries depend on the capacity strategy used by the competing firms. Our numerical results suggest that the firm with more expensive capacity always offers more homogeneous delivery times; however, its decision on how to differentiate its prices depends on the capacity setting of the two firms as well as the actual level of their capacity costs. On the other hand, the firm with a larger market base always offers more differentiated prices as well as delivery times, irrespective of the capacity setting of the competing firm.
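The analytical difficulty the abstract points to concerns the delivery-time distribution of the low-priority class; mean waiting times, at least, do have a closed form for a two-class non-preemptive priority M/M/1 queue (Cobham's formula), which gives a quick feel for the express/regular trade-off under shared capacity. The arrival and service rates below are invented for illustration.

```python
# Cobham's formula for mean queueing delays in a two-class,
# non-preemptive priority M/M/1 queue: class 0 (express) is served
# before class 1 (regular). Only the *means* are closed-form; the full
# regular-class delay distribution needs numerical methods such as the
# matrix geometric approach used in the thesis.

lam = [2.0, 3.0]   # arrival rates: express, regular (hypothetical)
mu = 10.0          # common exponential service rate (hypothetical)

rho = [l / mu for l in lam]
# Mean residual work R = sum_i lam_i * E[S_i^2] / 2, with E[S^2] = 2/mu^2
# for exponential service.
R = sum(l * (2.0 / mu**2) for l in lam) / 2.0

Wq_express = R / (1.0 - rho[0])
Wq_regular = R / ((1.0 - rho[0]) * (1.0 - rho[0] - rho[1]))
print(Wq_express, Wq_regular)
```

A useful sanity check is the conservation law: the workload-weighted average of the two class delays equals that of the FCFS M/M/1 queue with the same total load, so prioritising the express class can only redistribute delay, not destroy it.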