251

An Efficient Hierarchical Optical Path Network Design Algorithm based on a Traffic Demand Expression in a Cartesian Product Space

Yagyu, Isao, Hasegawa, Hiroshi, Sato, Ken-ichi 08 1900 (has links)
No description available.
252

Hierarchical optical path network design algorithm that can best utilize WSS/WBSS based cross-connects

Hai-Chau, Le, Hasegawa, Hiroshi, Sato, Kenichi 15 September 2009 (has links)
No description available.
253

An Improved Algorithm for the Net Assignment Problem

HIRATA, Tomio, ONO, Takao 01 May 2001 (has links)
No description available.
254

Heterogeneity-awareness in multithreaded multicore processors

Acosta Ojeda, Carmelo Alexis 07 July 2009 (has links)
During the last decades, computer architecture has experienced a series of revolutionary changes. The increasing transistor count on a single chip has led to some of the main milestones in the field, from the release of the first superscalar processor (1965) to state-of-the-art multithreaded multicore architectures such as the Intel Core i7 (2009).

Moore's Law has held for almost half a century and is not expected to stop for at least another decade, and perhaps much longer. Moore observed a trend in process technology advances: the number of transistors that can be placed inexpensively on an integrated circuit grows exponentially, doubling approximately every two years. Nevertheless, having more transistors available does not always translate directly into more performance.

The complexity of state-of-the-art software has reached heights unthinkable in earlier times, both in the amount of computation and in the complexity involved. If we analyze this complexity in depth, we find that software is composed of smaller execution processes that, while maintaining a certain spatial/temporal locality, exhibit inherently heterogeneous behavior. That is, at execution time the hardware runs very different portions of software, with huge differences in behavior and hardware requirements. This heterogeneity is not specific to the latest video game; it is inherent to software programming itself, present since the very beginning of algorithmics.

In this PhD dissertation we analyze in depth the inherent heterogeneity present in software behavior. We identify the main issues and sources of this heterogeneity, which prevent most state-of-the-art processor designs from reaching their maximum potential. This heterogeneity renders most current processors, commonly called general-purpose processors, overdesigned: they have far more hardware resources than are really needed to execute the software running on them. This would not be a major problem if we were not concerned about the additional power consumed by software computation.

The final goal of this dissertation is to assign each portion of software exactly the amount of hardware resources it really needs to fully exploit its potential, without consuming more energy than strictly necessary; that is, to obtain complexity-effective executions using the inherent heterogeneity in software behavior as a steering indicator. We therefore start by analyzing in depth the heterogeneous behavior of software running on general-purpose processors, and then match it to heterogeneously distributed hardware that explicitly exploits these heterogeneous requirements. Only by being aware of the heterogeneity in software, and by appropriately matching this software heterogeneity to hardware heterogeneity, can we effectively obtain better processor designs.

The dissertation comprises four main contributions that cover both multithreaded single-core (hdSMT) and multicore (TCA Algorithm, hTCA Framework and MFLUSH) scenarios, each explained in detail in its corresponding chapter. Overall, these contributions cover a significant range of the design space of heterogeneity-aware processors. Within this design space, we focus on the state-of-the-art trend in processor design: multithreaded multicore (CMP+SMT) processors.

We place special emphasis on the MPsim simulation tool, designed and developed specifically for this dissertation. The tool has already gone beyond this work, becoming a reference tool for an important group of researchers across the Computer Architecture Department (DAC) at the Polytechnic University of Catalonia (UPC), the Barcelona Supercomputing Center (BSC) and the University of Las Palmas de Gran Canaria (ULPGC).
255

Models and Algorithms for Location-Routing and Related Problems

Albareda Sambola, Maria 02 June 2003 (has links)
The most common decisions to be taken in the design of logistic systems are related to the location of facilities and the management of vehicle fleets. In this thesis, we study three of the optimization problems arising around this kind of decision: the location-routing problem (LRP), the stochastic generalized assignment problem (SGAP) and the stochastic location-routing problem (SLRP).

The first problem analyzed in this work is a capacitated LRP with one single uncapacitated vehicle at each open plant. To model this problem we resort to an auxiliary network that allows us to represent feasible solutions as families of paths satisfying a series of side constraints. The solutions of a reinforced LP relaxation of this model are used as the basis of a rounding heuristic designed to build feasible solutions of the problem. Those solutions are then improved with a tabu search (TS) heuristic. Two lower bounds, distinct from that obtained with the LP relaxation of the model, are proposed for this problem. The first one is obtained by bounding separately the two parts of the cost of any feasible solution, namely the fixed costs for opening plants and the route costs. The second lower bound is the result of applying column generation (CG) to the Lagrangian dual obtained by dualizing the assignment constraints. The pricing problem obtained from our formulation is an elementary shortest path problem with resource constraints (ESPPRC). The complexity of this problem, and the fact that optimality of the obtained solutions is not always necessary, have motivated us to develop a simple heuristic for it.

The computational experiences show a very good behavior of the TS procedure, both in the computational effort required and in the quality of the solutions. The first lower bound proposed gives satisfactory results in reasonable amounts of time. In the case of the CG approach, results are very encouraging. In some of the tested instances the program terminated because of the CPU time limit before succeeding in finding a valid lower bound; in those instances, the algorithm was always stalled in the exact resolution of an ESPPRC. The difficulties encountered in solving this problem represent a limitation of this approach and suggest the future study of alternative solution methods. In spite of this limitation, the algorithm succeeded in a high proportion of the instances, and the final gap between the upper and the lower bound was always 0. The success in these instances is partially due to the use of our heuristic to generate new columns whenever this is possible.

The second problem studied in this thesis is an SGAP. In this assignment problem the jobs are interpreted as customers that request a service with a given probability, and each agent can serve a limited number of customers. The uncertainty about the presence of each customer is represented by modelling the customers' demands as independent Bernoulli random variables. The problem consists of finding an a priori assignment of customers to agents. Once the actual requests for service are known, an adaptive action is taken to tackle violations of the capacity constraints: part of the customers assigned to overloaded agents can be reassigned, and some of the service requests can be disregarded. Different penalties for reassignment and for unattended service requests are pre-specified.

The problem is formulated as a recourse model, where the recourse function gives the expected penalties for reassignments and unattended service requests. Since this recourse function is defined as the expected value of an integer programming recourse model, it does not have the regularity properties characteristic of those defined by linear recourse models. To overcome the difficulties this causes, we construct a convex approximation of the recourse function that is tight in all feasible points. Moreover, as illustrated in the computational experiences, the use of this approximation reduces the computational effort required to evaluate the recourse function by some orders of magnitude. The convex approximation of the recourse function allows us to adapt the well-known L-shaped method to our problem. Integrality of the first-stage variables is tackled in three different ways, giving rise to three versions of the algorithm. The difference among them resides in the hierarchy between branching and the addition of violated cuts. In one version, cuts are added only when integer solutions are found; in another, branching is performed only when no more violated cuts can be identified. The remaining version is designed as a tradeoff between these two: at each node of the search tree, new optimality cuts are added if needed, and branching is performed if the solution at hand is fractional. Computational experiences point out this last version as the best of the three, since the efforts devoted to obtaining a rich approximation of the recourse function and to achieving integrality are more balanced.

We have also derived both lower and upper bounds for this specific SGAP. Upper bounds are obtained from three simple heuristics, all based on solving deterministic approximations of the SGAP, which provide good-quality solutions in small amounts of CPU time. A lower bound is derived from a family of linear stochastic subproblems. Although in some of the tested instances the gap between the bounds exceeded 30%, in the general case we obtained small gaps. One of the heuristics was used in the exact algorithm to provide it with a good upper bound. The lower bound is also used in the three versions of the algorithm, as the basis of some of the optimality cuts and also to identify optimal solutions. The quality of these bounds is one of the factors that explain the success of the exact algorithm.

The last problem studied in this thesis is an SLRP. The stochasticity considered here is of the same type as that considered for the SGAP: customers may request a service with a given probability, and this is modeled by introducing Bernoulli random variables to represent the demands. A two-stage model is proposed for this problem. In the first stage, a set of plants to open has to be chosen, together with a family of disjoint routes (one rooted at each open plant) that visit all the customers. In the second stage, once all the demands become available, the actual routes have to be designed. For plants whose number of service requests does not exceed the capacity, the actual route is derived from that designed a priori by skipping customers with no demand. When the requests for service allocated to a plant exceed its capacity, a subset of them is randomly chosen to be served, and they are visited in the order defined by the a priori route. Penalties are paid for the unattended service requests. The expected total cost of the actual routes and the expected penalties for unserviced customers are contained in the recourse function.

We present a two-phase heuristic to solve this problem. In the first phase, a series of subproblems are sequentially solved to build an initial solution. In the second phase, this solution is successively improved using local search. This improving phase requires a high number of evaluations of the recourse function. Although we have developed an analytical expression for this recourse function, the computational effort required for its evaluation is considerable due to its combinatorial nature. For this reason, we approximate it with a simpler auxiliary function that has allowed us to obtain solutions in small computational times. We also propose a lower bound obtained by bounding different parts of the objective function independently. Unfortunately, we could only find reasonable bounds for the sum of the fixed costs for opening the plants plus the expected penalty paid for unserviced customers. Further research is intended to improve the bounding of the expected total cost of the routes.

The evaluation of the quality of the solutions obtained with our heuristic is not easy due to the lack of a tight global lower bound. However, the partial bound on the costs relative to the plants allows us to conclude that the heuristic in general makes a good choice of the set of plants. As for the allocation of customers to plants and the design of the routes, we can only evaluate their evolution along the search; in the computational experiences reported, this evolution is satisfactory.
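
For readers unfamiliar with the recourse formulations described above, a schematic two-stage model of the kind used for the SGAP can be sketched as follows. The notation here is illustrative only, not the thesis's own formulation:

% Schematic two-stage recourse model for the stochastic GAP
% (illustrative notation, not the formulation from the thesis).
% x_{ij} = 1 if customer j is assigned a priori to agent i;
% D_j ~ Bernoulli(p_j) are the independent random service requests.
\begin{align*}
\min_{x}\quad & \sum_{i}\sum_{j} c_{ij}\, x_{ij} \;+\; \mathbb{E}_{D}\!\left[\, Q(x, D) \,\right] \\
\text{s.t.}\quad & \sum_{i} x_{ij} = 1 \quad \forall j, \qquad x_{ij} \in \{0,1\},
\end{align*}
% where Q(x, D), the recourse function, is the minimum total penalty
% (for reassignments and unattended requests) needed to restore the
% agents' capacity constraints once the realized demands D are known.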
256

On Efficient Semidefinite Relaxations for Quadratically Constrained Quadratic Programming

Ding, Yichuan 17 May 2007 (has links)
Two important topics in the study of Quadratically Constrained Quadratic Programming (QCQP) are how to solve exactly, in polynomial time, a QCQP with few constraints, and how to find an inexpensive and strong relaxation bound for a QCQP with many constraints. In this thesis, we first review some important results on QCQP, such as the S-procedure and the strength of the Lagrangian relaxation and the semidefinite relaxation. Then we focus on two special classes of QCQP, whose objective and constraint functions take the forms trace(X^T Q X + 2 C^T X) + β and trace(X^T Q X + X P X^T + 2 C^T X) + β respectively, where X is an n-by-r real matrix. For each class of problems, we propose different semidefinite relaxation formulations and compare their strength. The theoretical results obtained in this thesis have found interesting applications, e.g., solving the Quadratic Assignment Problem.
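
For context, the standard (Shor) semidefinite relaxation of a generic QCQP over a vector variable, which the matrix-variable formulations above generalize, can be written as follows. This is an illustrative sketch of the well-known construction, not the thesis's formulations:

% Generic QCQP over x in R^n and its Shor semidefinite relaxation
% (illustrative sketch; the thesis studies matrix-variable analogues).
% Original problem:
%   min  x^T Q_0 x + 2 c_0^T x + beta_0
%   s.t. x^T Q_i x + 2 c_i^T x + beta_i <= 0,   i = 1, ..., m.
% Lifting X = x x^T and relaxing to X >= x x^T gives the SDP:
\begin{align*}
\min_{x,\,X}\quad & \operatorname{tr}(Q_0 X) + 2 c_0^\top x + \beta_0 \\
\text{s.t.}\quad  & \operatorname{tr}(Q_i X) + 2 c_i^\top x + \beta_i \le 0, \quad i = 1,\dots,m, \\
& \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0 ,
\end{align*}
% where the Schur-complement condition encodes X - x x^T >= 0.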
257

Efficient Frequency Grouping Algorithms for iDEN

Dandanelle, Alexander January 2003 (has links)
This Master's Thesis deals with a special problem that may arise when planning a frequency-hopping mobile communication network. Normally, the Frequency Assignment Problem is solved in order to plan the use of frequencies in a network. The special case discussed in this thesis occurs when the network operator requires that the frequencies be arranged into groups; the Frequency Assignment Problem must then be solved with respect to the groups, i.e. as a Group Assignment Problem. The thesis constitutes the final part of the Master of Science in Communication and Transport Systems Engineering education at Linköping University, Campus Norrköping. The Group Arrangement Problem was presented by ComOpt, a company that specializes in solving the Frequency Assignment Problem for network operators. This thesis does not deal with solving the Frequency Assignment Problem with respect to the groups; its main issue is to construct a computer-based algorithm that solves the Group Arrangement Problem, i.e. that creates the groups. The goal is an algorithm that creates groups yielding a better solution to the Frequency Assignment Problem than manually created groups. Two algorithms are presented and tested on two cases, and their respective results are compared with the results from a manual grouping. According to an artificial quality measure, both computer-based algorithms create better groups than the manual grouping strategy. As of spring 2003, a variant of one of the presented algorithms was implemented in ComOpt's product for solving the Frequency Assignment Problem.
258

Investigation of service selection algorithms for grid services

Guha, Tapashree 15 September 2009 (has links)
Grid computing has emerged as a global platform to support organizations in the coordinated sharing of distributed data, applications, and processes. Grid computing has also leveraged web services to define standard interfaces for Grid services, adopting the service-oriented view. Consequently, there have been significant efforts to offer applications that tackle computationally intensive problems as services on the Grid. To ensure that the available services are assigned efficiently to the high volume of incoming requests, a robust service selection algorithm is needed. The selection algorithm should not only increase access to the distributed services, promoting operational flexibility and collaboration, but should also allow service providers to scale efficiently to meet a variety of demands while adhering to current Quality of Service (QoS) standards. In this research, two service selection algorithms are proposed: the Particle Swarm Intelligence based Service Selection Algorithm (PSI Selection Algorithm), based on the Multiple Objective Particle Swarm Optimization algorithm with the crowding distance technique, and the Constraint Satisfaction based Selection (CSS) algorithm. The proposed selection algorithms are designed to achieve the following goals: handling a large number of incoming requests simultaneously; achieving high match scores when competitively matching similar types of incoming requests; assigning each service efficiently to all the incoming requests; giving service requesters the flexibility to specify multiple selection criteria based on a QoS metric; and selecting the appropriate services for the incoming requests within a reasonable time. The two algorithms are then verified against a standard assignment problem algorithm, the Munkres algorithm. The feasibility and accuracy of the proposed algorithms are tested using various evaluation methods based on real-world scenarios; accuracy here is primarily a measure of how closely requests are matched to the available services, given the QoS parameters provided by the requesters.
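
As an illustration of the verification step, the optimal assignment the Munkres algorithm computes can be reproduced with standard tooling; below is a minimal Python sketch using SciPy's linear sum assignment solver, which solves the same problem. The match-score matrix is made up for illustration, not data from the thesis.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical match-score matrix: rows are incoming requests,
# columns are available services (higher score = better QoS match).
scores = np.array([
    [0.9, 0.6, 0.1],
    [0.4, 0.8, 0.7],
    [0.3, 0.2, 0.95],
])

# linear_sum_assignment minimizes cost, so negate the scores to get
# the request-to-service pairing with maximum total match score.
rows, cols = linear_sum_assignment(-scores)
total = scores[rows, cols].sum()
print(list(zip(rows, cols)), total)  # optimal pairing and its score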
259

Robust Airline Fleet Assignment

Smith, Barry Craig 23 August 2004 (has links)
Fleet assignment models are used by many airlines to assign aircraft to flights in a schedule so as to maximize profit. Major airlines report that the use of fleet assignment models increases annual profits by more than $100 million. The results of fleet assignment models affect subsequent planning, marketing and operational processes within the airline; anticipating these processes and developing solutions favorable to them can further increase the benefits of fleet assignment models. We propose to produce fleet assignment solutions that increase planning flexibility and reduce cost by imposing station purity, i.e. limiting the number of fleet types allowed to serve each airport in the schedule. We demonstrate that imposing station purity on the fleet assignment model can limit aircraft dispersion in the network and make solutions more robust with respect to crew planning, maintenance planning and operations. Because station purity can significantly degrade computational efficiency, we develop a solution approach, Station Decomposition, that takes advantage of the airline network structure. Station Decomposition uses a column generation approach to solve the fleet assignment problem; we further improve its performance by developing a primal-dual method that increases solution quality and model efficiency. Because Station Decomposition solutions can be highly fractional, we develop a fix-and-price heuristic to find integer solutions to the fleet assignment problem efficiently. Airline profitability can be further increased if fleet assignment models anticipate the effects of marketing processes such as revenue management. We develop an approach, ODFAM, which incorporates airline revenue management effects into the fleet assignment model, and an approach that incorporates both station purity and ODFAM using a combination of column and cut generation. This approach can increase airline profit by up to $27 million per year.
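
Column generation approaches such as Station Decomposition are built on a common restricted-master/pricing loop; a generic Python skeleton of that loop is sketched below. All helper functions here are hypothetical placeholders, not the thesis's implementation.

# Generic column generation loop (schematic; solve_restricted_master
# and price_out are hypothetical solver callbacks, not real APIs).
def column_generation(initial_columns, solve_restricted_master, price_out):
    columns = list(initial_columns)
    while True:
        # Solve the LP restricted to the columns generated so far,
        # obtaining a primal solution and the dual prices.
        solution, duals = solve_restricted_master(columns)
        # Pricing step: look for columns with negative reduced cost
        # under the current duals (e.g., one subproblem per station).
        new_columns = price_out(duals)
        if not new_columns:
            # No improving column exists: the LP relaxation is solved.
            # A fix-and-price heuristic would now fix fractional
            # assignments and reprice to drive toward integrality.
            return solution, columns
        columns.extend(new_columns)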
260

Minimizing Multi-zone Orders in the Correlated Storage Assignment Problem

Garfinkel, Maurice 14 January 2005 (has links)
A fundamental issue in warehouse operations is the storage location of the products the warehouse contains. Placing products intelligently within the system can greatly reduce order-picking costs, which matters because order picking is a major cost of warehouse operations: a study by Drury conducted in the UK found that 63% of warehouse operating costs are due to order picking. When orders contain a single item, the COI (cube-per-order index) rule of Heskett is an optimal storage policy. This is not true when orders contain multiple line items, because no information is used about which products are ordered together. In that situation, products that are frequently ordered together should be stored together; this is the basis of the correlated storage assignment problem. Several previous researchers have considered how to form such clusters of products with the ultimate objective of minimizing travel time. In this dissertation, we focus on the alternative objective of minimizing multi-zone orders. We present a mathematical model and discuss properties of the problem, and a Lagrangian relaxation solution approach is discussed. In addition, we both develop and adapt several heuristics from the literature to give upper bounds for the model. A cyclic exchange improvement method is also developed: although the neighborhood it defines is of exponential size, it can be searched in polynomial time. Even from poor initial solutions, this method finds solutions that outperform the best approaches from the literature. Different product sizes, stock splitting, and rewarehousing are problem features that our model can handle, and the cyclic exchange algorithm is also modified to allow these operating modes. In particular, stock splitting is a difficult issue that most previous research on correlated storage ignores. All of our algorithms are implemented and tested on data from a functioning warehouse. For all data sets, the cyclic exchange algorithm outperforms COI, the standard industry approach, by an average of 15%.
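
For context, the COI rule mentioned above ranks each product by its cube-per-order index (required storage space divided by order frequency) and assigns the lowest-COI products to the most accessible locations. A minimal Python sketch with made-up data (not from the dissertation):

# Cube-per-order index (COI) rule of Heskett: assign products with the
# lowest space-to-demand ratio to the most accessible locations.
# The product data (space in cubic feet, picks per period) is illustrative.
products = {
    "A": {"space": 4.0, "picks": 120},
    "B": {"space": 2.0, "picks": 10},
    "C": {"space": 6.0, "picks": 300},
}
# Locations sorted from most to least accessible (lowest travel cost first).
locations = ["slot1", "slot2", "slot3"]

# COI = space / picks; smaller is better, so low-COI products get prime slots.
ranked = sorted(products, key=lambda p: products[p]["space"] / products[p]["picks"])
assignment = dict(zip(ranked, locations))
print(assignment)  # {'C': 'slot1', 'A': 'slot2', 'B': 'slot3'}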
