  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
901

Two-stage stochastic linear programming: Stochastic decomposition approaches.

Yakowitz, Diana Schadl. January 1991
Stochastic linear programming problems are linear programming problems in which one or more data elements are described by random variables. In two-stage problems, a first-stage decision is made before the random variables are observed; a second-stage, or recourse, decision, which varies with those observations, compensates for any deficiencies resulting from the earlier decision. Many application areas, including water resources, industrial management, economics, and finance, lead to two-stage stochastic linear programs with recourse. In this dissertation, two algorithms for solving stochastic linear programming problems with recourse are developed and tested. The first is referred to as Quadratic Stochastic Decomposition (QSD), an enhanced version of the Stochastic Decomposition (SD) algorithm of Higle and Sen (1988). The enhancements were designed to increase the computational efficiency of SD by introducing a quadratic proximal term in the master program objective function and altering the manner in which the recourse function approximations are updated. We show that, with probability 1, every accumulation point of an easily identifiable subsequence of points generated by the algorithm is an optimal solution to the stochastic program. The various combinations of the enhancements are empirically investigated in a computational experiment using operations research problems from the literature. The second algorithm is an SD-based algorithm for solving a stochastic linear program in which the recourse problem appears in the constraint set; it involves the use of an exact penalty function in the master program. We find that, under certain conditions and with probability 1, every accumulation point of a sequence of points generated by the algorithm is an optimal solution to the recourse-constrained stochastic program. This algorithm is tested on several operations research problems.
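As an illustrative sketch of the two-stage structure (not taken from the dissertation), the deterministic-equivalent "extensive form" of a tiny recourse model can be solved directly with an off-the-shelf LP solver; the first-stage cost, demand scenarios, probabilities, and shortfall penalty below are all invented for illustration:

```python
from scipy.optimize import linprog

# Hypothetical data: first-stage unit cost, three demand scenarios with
# probabilities, and a unit shortfall penalty paid in the second stage.
c_first = 1.0
demands = [5.0, 10.0, 15.0]
probs = [0.3, 0.4, 0.3]
q = 2.0

# Variables z = [x, y_1, y_2, y_3]; minimize x + sum_s p_s * q * y_s
# subject to x + y_s >= d_s (recourse y_s covers the shortfall), z >= 0.
cost = [c_first] + [p * q for p in probs]
A_ub = [[-1.0 if j == 0 or j == s + 1 else 0.0 for j in range(4)]
        for s in range(3)]
b_ub = [-d for d in demands]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub)  # bounds default to z >= 0
print(res.x[0], res.fun)  # first-stage decision and optimal expected cost
```

Here the first stage commits to x before demand is known, and each y_s compensates for the scenario-s shortfall at the penalty rate; decomposition methods such as SD and QSD become necessary when the number of scenarios is far too large to write out this extensive form.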
902

The effects of decentralization and trust on the optimization of information gathering activities for cooperative autonomous vehicles

Ortiz-Pena, Hector J. 01 August 2015
One of the main technical challenges facing intelligence analysts today is the efficient utilization of limited resources (both in quantity and capabilities) to maximize the accuracy and timeliness of collected data to address information gaps. The challenge increases when communication between the information gathering assets is limited, preventing constant coordination of collection activities and information sharing among the assets. This work addresses the problem of routing cooperative autonomous vehicles (e.g., unmanned vehicles) operating in a dynamic environment to maximize overall information gain. Vehicles (collection assets) collect information on multiple objectives, subject to communication network constraints. In addition, this research studies the degradation of solution quality (i.e., information gain) as a centralized system synchronizing information gathering activities for a set of cooperative autonomous vehicles moves to a decentralized framework.

A mathematical programming model to determine routes in (de)centralized frameworks was developed. This model is based on a representation of potential information gain in discretized maps, the effectiveness of the assets collecting information, and an obsolescence rate on the areas visited by the assets. The model assumes that information is exchanged only when assets are part of the same network, allowing a multi-perspective optimization of the information gathering activities in which each asset develops its own decisions based only on its perspective of the environment (i.e., potential information gain). This framework is used to evaluate the degradation of solution quality as a centralized system moves to a decentralized framework. This research extends the concept of "price of anarchy" (a measure of the inefficiency of a system when individuals, i.e., agents, maximize decisions without coordination) by considering different levels of decentralization.

Different communication network topologies are considered. Collection assets are part of the same communication network (i.e., a connected component) if (1) a fully connected network exists between the assets in the connected component, or (2) a path (consisting of one or more communication links) exists between every pair of assets in the connected component. Multiple connected components may exist among the available collection assets supporting a mission. Trust (with a suitable decay factor as a function of time) in the potential location of assets that are not part of a connected component is considered in an extension to the optimization model. A solution approach based on multiple aggregation strategies was developed to obtain satisficing solutions that are computationally efficient.
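The basic ingredients of such a model (a discretized gain map, collection, and an obsolescence rate on visited areas) can be conveyed with a toy single-asset sketch. The grid values, obsolescence rate, and myopic greedy policy below are invented for illustration and are not the mathematical programming model developed in the dissertation:

```python
import numpy as np

def greedy_route(gain, start, steps, obsolescence=0.05):
    """Myopically route one asset on a discretized information-gain map.

    gain: 2-D array of potential information gain per cell.
    obsolescence: rate at which collected cells regain value over time.
    """
    gain = gain.astype(float).copy()
    peak = gain.copy()                 # ceiling each cell can recover to
    pos = start
    collected = 0.0
    for _ in range(steps):
        r, c = pos
        # candidate moves: 4-neighbours that stay on the map
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < gain.shape[0] and 0 <= c + dc < gain.shape[1]]
        pos = max(nbrs, key=lambda rc: gain[rc])   # myopic best neighbour
        collected += gain[pos]
        gain[pos] = 0.0                            # information collected
        gain += obsolescence * (peak - gain)       # visited areas decay back
    return collected, pos

grid = np.array([[0., 1., 2.],
                 [1., 5., 3.],
                 [2., 3., 9.]])
total, final = greedy_route(grid, (0, 0), steps=6)
```

A centralized planner would optimize such routes jointly over all assets and the full map; the decentralized case studied in the dissertation restricts each asset to its own (possibly stale) copy of the gain map, which is what drives the solution-quality gap.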
903

Decomposition algorithms for stochastic combinatorial optimization: Computational experiments and extensions

Ntaimo, Lewis. January 2004
Some of the most important and challenging problems in computer science and operations research are stochastic combinatorial optimization (SCO) problems. SCO deals with a class of combinatorial optimization models and algorithms in which some of the data are subject to significant uncertainty and evolve over time, and discrete decisions often need to be made before complete future data are observed. Under such circumstances it becomes necessary to develop models and algorithms in which plans are evaluated against possible future scenarios representing alternative outcomes of the data. Consequently, SCO models are characterized by a large number of scenarios, discrete decision variables, and constraints. This dissertation focuses on the development of practical decomposition algorithms for large-scale SCO. Stochastic mixed-integer programming (SMIP), the branch of optimization concerned with models containing discrete decision variables and random parameters, provides one way of dealing with such decision-making problems under uncertainty. This dissertation studies decomposition algorithms, models, and applications for large-scale two-stage SMIP. The theoretical underpinnings of the method are derived from the disjunctive decomposition (D2) method. We study this class of methods through applications, computations, and extensions. With regard to applications, we first present a stochastic server location problem (SSLP) that arises in a variety of settings. These models give rise to SMIP problems in which all integer variables are binary, and we study the performance of the D2 method on them. To carry out a more comprehensive study of SSLP problems, we also present certain other valid inequalities for SMIP problems. Following our study of SSLP, we discuss the implementation of the D2 method and study its performance on problems in which the second stage is mixed-integer (binary). The models used in this experimental study have appeared in the literature as stochastic matching problems and stochastic strategic supply chain planning problems. Finally, as an extension of the D2 method, we present a new procedure in which the first-stage model is allowed to include continuous variables. We conclude this dissertation with several ideas for future research.
904

A neural network approach for the solution of Traveling Salesman and basic vehicle routing problems

Ghamasaee, Rahman, 1953- January 1997
Easy to explain and difficult to solve, the traveling salesman problem (TSP) is to find the minimum-distance Hamiltonian circuit on a network of n cities. The problem cannot be solved in polynomial time; that is, the maximum number of computational steps needed to find the optimum solution grows with n faster than any power of n. Very good combinatorial solution approaches, including heuristics with worst-case performance bounds, exist. Neural network approaches for solving the TSP have been proposed more recently. In the elastic net approach, the algorithm begins from m nodes on a small circle centered on the centroid of the distribution of cities. Each node is represented by the coordinates of the related point in the plane. By successive recalculation of the positions of the nodes, the ring is gradually deformed until it finally describes a tour around the cities. In another approach, the self-organizing feature map (SOFM), which is based on Kohonen's idea of winner-take-all, fewer than m nodes are updated at each iteration. In this dissertation I have integrated these two ideas to design a hybrid method with faster convergence to a good solution. On each iteration of the original elastic net method, two n × m matrices of connection weights and inter node-city distances must be calculated. In our hybrid method this has been reduced to the calculation of one row and one column of each matrix; thus, if the computational complexity of the elastic net is O(n × m), then the complexity of the hybrid method is O(n + m). The hybrid method is then used to solve the basic vehicle routing problem (VRP), which is the problem of routing vehicles between customers so that the capacity of each vehicle is not violated. A two-phase approach is used: in the first phase, clusters of customers that satisfy the capacity constraint are formed using a SOFM network; in the second phase, the above hybrid algorithm is used to solve the corresponding TSP. Our improved method is much faster than the elastic net method. Statistical comparison of the TSP tours shows no difference between the two methods. Our computational results for the VRP indicate that our heuristic outperforms existing methods by producing a shorter total tour length.
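For flavor, a plain Kohonen-style self-organizing ring for the TSP, the shared building block behind both the elastic net and the winner-take-all update, can be sketched as follows. This is not the dissertation's hybrid algorithm itself; the city layout and all parameters are invented:

```python
import numpy as np

def sofm_tsp(cities, m_factor=3, iters=4000, seed=0):
    """Kohonen-style self-organizing ring for the TSP (illustrative)."""
    rng = np.random.default_rng(seed)
    n = len(cities)
    m = m_factor * n                       # ring nodes outnumber cities
    theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    nodes = cities.mean(axis=0) + 0.1 * np.column_stack(
        [np.cos(theta), np.sin(theta)])    # small ring at the centroid
    idx = np.arange(m)
    for t in range(iters):
        city = cities[rng.integers(n)]
        w = np.linalg.norm(nodes - city, axis=1).argmin()  # winner-take-all
        ring_dist = np.minimum(np.abs(idx - w), m - np.abs(idx - w))
        radius = max(1.0, (m / 8.0) * (1.0 - t / iters))   # shrinking radius
        h = np.exp(-ring_dist**2 / (2.0 * radius**2))      # neighbourhood
        lr = 0.01 + 0.8 * (1.0 - t / iters)
        nodes += lr * h[:, None] * (city - nodes)          # pull ring to city
    # read the tour off the ring: order cities by their nearest node
    nearest = [np.linalg.norm(nodes - c, axis=1).argmin() for c in cities]
    return np.argsort(nearest)

def tour_length(cities, order):
    pts = cities[order]
    return float(np.linalg.norm(pts - np.roll(pts, 1, axis=0), axis=1).sum())

rng = np.random.default_rng(3)
angles = rng.permutation(np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False))
cities = np.column_stack([np.cos(angles), np.sin(angles)])
tour = sofm_tsp(cities)
```

The hybrid method's speedup comes from exploiting exactly this winner-take-all locality: only one row and one column of the weight and distance matrices change per iteration, rather than all n × m entries.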
905

Irrigation scheduling decision support

Fox, Fred Andrew, 1956- January 1997
Irrigation scheduling using the soil water balance approach has been recommended to irrigators for many years. Reasonably good results are normally obtained by researchers using carefully quantified inputs. Irrigators in production agriculture may estimate inputs and then question the validity of the method when the irrigation recommendations conflict with present irrigation schedules. By associating each input with an interval representing possible bias based on the way the input was estimated, and solving the irrigation scheduling model using the intervals as inputs, the output was associated with an interval representing possible bias. This method was also used to evaluate possible bias associated with growing degree day based crop coefficient curves developed from Arizona crop consumptive use measurements. For comparison purposes, roughly estimated inputs based on irrigation system type, soil type, area weather data and available crop coefficient curves were used as default intervals. Improved input intervals consisted of observed irrigation system performance, soil property measurements, local weather data and theoretical improvements in crop coefficient curves. For surface irrigation, field observation of plant stress and soil water content showed the greatest potential to improve irrigation date predictions. For buried drip under a row crop, accuracy of the predicted daily irrigation rate was most improved by a better estimate of irrigation efficacy.
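The interval-propagation idea can be sketched with a minimal daily soil-water-balance recursion. The crop coefficient, weather values, and depletion threshold below are invented for illustration:

```python
# Closed interval arithmetic: each quantity is a (low, high) pair.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

# Hypothetical inputs (mm of water), each widened to an interval
# representing possible bias in how the input was estimated.
kc = (0.9, 1.1)                  # crop coefficient interval
et0_days = [(5.0, 6.0)] * 12     # reference evapotranspiration, 12 days
rain = (0.0, 0.0)
mad = 40.0                       # management-allowed depletion threshold

depletion = (0.0, 0.0)
earliest = latest = None
for day, et0 in enumerate(et0_days, start=1):
    etc = imul(kc, et0)                           # crop water use interval
    depletion = isub(iadd(depletion, etc), rain)  # daily water balance
    if earliest is None and depletion[1] >= mad:
        earliest = day   # irrigation already needed under worst-case bias
    if latest is None and depletion[0] >= mad:
        latest = day     # irrigation needed even under best-case bias
```

The predicted irrigation date is then reported as an interval (here, day 7 under worst-case input bias through day 9 under best-case bias) rather than a single day, which lets an irrigator see directly how much the input bias matters.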
906

Operating policies for manufacturing cells

Iyer, Anand, 1968- January 1996
Manufacturing cells consisting of an empowered team of workers and the resources required to produce a family of related products have become popular in recent years. Such cells require significant changes in organizational policies for personnel, wage administration, accounting and scheduling. For example, there are usually fewer workers than machines and as a result cells are staffed by cross-trained workers. However, little is known about operating these cells since much of the research in this area has concentrated on the cell formation problem. This thesis discusses the issue of determining good operating policies for manufacturing cells. Operating policy refers to a protocol for setting lot sizes, transfer batch sizes, cell Work-In-Process limits and machine queue dispatching as well as worker assignment rules. Specific components of operating policies have been examined in isolation previously in different contexts. However, cell performance is determined not only by the individual components of policies but also by the nature of the interactions between them. Thus, it is imperative to study policies in an integrated manner in order to determine how best to utilize the limited resources of the cell. The initial part of the thesis is devoted to discussing a general framework which has been developed to parameterize operating policies. Specific policies can be recovered by assigning values to the parameters of the framework. A few examples illustrate the use of the framework. The remainder of the thesis focuses on the various ways in which the framework representation of policies can be used. This includes the development of a general purpose simulator using the Object-Oriented paradigm and analytical models for some policies. A comparison of various operating strategies using simulation and analytical models is also presented. The thesis concludes with a discussion of the insights gleaned from this work as well as directions for future work.
907

A software laboratory and comparative study of computational methods for Markov decision processes

Choi, Jongsup, 1956- January 1996
Dynamic programming (DP) is one of the most important mathematical programming methods. However, a major limitation in the practical application of DP methods to stochastic decision and control problems has been the explosive computational burden. Significant research has focused on improving the speed of convergence and allowing for larger state and action spaces. The principal methods and algorithms of DP are surveyed in this dissertation. The rank-one correction (ROC) method for value iteration recently proposed by Bertsekas was designed to increase the speed of convergence. In this dissertation we extend the ROC method to problems with multiple policies. This method is particularly well suited to systems with substochastic matrices, e.g., those arising in shortest path problems. In order to test, verify, and compare different computational methods, we developed SYSCODE, a FORTRAN software laboratory for (S)tochastic s(YS)tems (CO)ntrol and (DE)cision algorithms for discrete-time, finite Markov decision processes. SYSCODE is a user-friendly, interactive software laboratory that provides the user with a choice of 39 combinations of DP algorithms for testing and comparison, and it has been endowed with sophisticated capabilities for random problem data generation. We present a comprehensive computational comparison of many of the algorithms provided by SYSCODE using well-known test problems as well as randomly generated problem data.
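As context, the baseline value-iteration algorithm that acceleration schemes such as rank-one corrections build on can be sketched in a few lines; the two-state example MDP is invented for illustration:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a] is the S x S transition matrix under action a;
    R[a][s] is the one-step reward for taking action a in state s."""
    V = np.zeros(P[0].shape[0])
    while True:
        # Bellman backup: one-step lookahead value of each action
        Q = np.array([R[a] + gamma * (P[a] @ V) for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Invented example: action 1 moves state 0 to an absorbing state 1
# that pays reward 1 forever; action 0 stays put for free.
P = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # action 0: stay
     np.array([[0.0, 1.0], [0.0, 1.0]])]   # action 1: go to state 1
R = [np.array([0.0, 1.0]),                  # rewards under action 0
     np.array([0.0, 1.0])]                  # rewards under action 1
V, policy = value_iteration(P, R)
```

For this example the fixed point is V = (9, 10) with discount 0.9, and the optimal policy moves out of state 0; the convergence of these repeated backups is exactly what acceleration methods like ROC aim to speed up.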
908

Construction and solution of Markov reward models

Qureshi, Muhammad Akber, 1964- January 1996
Stochastic Petri nets (SPNs) and their extensions are a popular method for evaluating a wide variety of systems. In most cases, the interesting measures regarding a system's characteristics can be defined at the net level by means of reward variables. Depending on the measures, these net-level reward models are solved either by first generating a state-level reward model or by directly generating paths from the net-level description. In this thesis, we propose algorithms for the generation of state-level reward models as well as for directly obtaining solutions from net-level reward models when the net-level reward models are specified as stochastic activity networks (SANs) with a "step-based reward structure." Moreover, we propose algorithms for computing the expected value and the probability distribution function of a reward variable at specified time instants, and for computing the probability distribution function of reward accumulated during a finite interval. The interval may correspond to the mission period in a mission-critical system, the time between scheduled maintenances, or a warranty period, while the time instants may be critical moments during these intervals. The proposed algorithms avoid the construction of state-level representations and the memory growth problems experienced when applying previous approaches to large models. Furthermore, we study the effect of workload on the availability and response time of voting algorithms. Voting algorithms are a popular way to provide data consistency in replicated data systems. Many models have been developed to study the degree to which replication increases the availability of data, and some have been developed to study the cost incurred in maintaining consistency. However, little work has been done to evaluate the time it takes to serve a request, accounting for server and network failures, or to determine the effect of workload on these measures. In this thesis, we use stochastic activity networks (SANs) to study the effect of workload on the availability and mean response time of two variants of a replicated file system that maintain data consistency, one using a static voting algorithm and the other using a dynamic voting algorithm.
909

Evaluating bias in models for predicting emergency vehicle busy probabilities

Benitez Auza, Ricardo Ariel, 1964- January 1990
In this thesis we discuss three models that are used to estimate vehicle busy probabilities when call service time depends on call location and the serving vehicle. The first model requires an assumption that each vehicle operates independently of the other vehicles. The second model approximately corrects for the independence assumption. The third model also approximately corrects for the independence assumption; however, it assumes that all vehicles have an equal busy probability. We evaluate model bias by comparing the estimates from each model with estimates from a simulation model, using extremely long runs to ensure that the simulation is both accurate and precise. Our results suggest that the model using the independence assumption performs poorly as system utilization increases. The correction models, however, perform well over a wide range of system sizes and utilizations. (Abstract shortened with permission of author.)
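A stripped-down version of such a comparison can be sketched with an event-driven simulation of a zero-queue dispatch system. The arrival rate, service time, and fleet size below are invented, and the simple "offered load per vehicle" estimate stands in for the thesis's independence-assumption model:

```python
import random

def simulate_busy_fractions(lam, mean_service, n_veh, horizon, seed=1):
    """Poisson calls, exponential service, a random free vehicle serves
    each call; calls that find every vehicle busy are lost (zero queue)."""
    rng = random.Random(seed)
    free_at = [0.0] * n_veh      # time at which each vehicle becomes free
    busy_time = [0.0] * n_veh
    t = 0.0
    while True:
        t += rng.expovariate(lam)          # next call arrival
        if t >= horizon:
            break
        idle = [i for i in range(n_veh) if free_at[i] <= t]
        if idle:
            i = rng.choice(idle)           # random dispatch among free units
            s = rng.expovariate(1.0 / mean_service)
            free_at[i] = t + s
            busy_time[i] += min(s, horizon - t)
    return [b / horizon for b in busy_time]

lam, mean_service, n_veh = 1.0, 2.0, 3      # heavily loaded: offered load 2.0
naive = lam * mean_service / n_veh           # independence-style estimate
sim = simulate_busy_fractions(lam, mean_service, n_veh, horizon=50000.0)
```

At this utilization the naive estimate (about 0.67) noticeably overstates the simulated busy fractions, because calls arriving when all vehicles are busy are lost; the gap shrinks as utilization drops, which mirrors the bias pattern the thesis reports.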
910

Some descriptors of the Markovian arrival process

Narayana, Surya, 1962- January 1991
The Markovian Arrival Process (MAP) is a tractable, versatile class of Markov renewal processes which has been extensively used to model arrival (or service) processes in queues. This thesis mainly deals with the first two moment matrices of the counts for the MAP. We derive asymptotic expansions for these two moment matrices and also derive efficient and stable algorithms to compute these matrices numerically. Simpler expressions for some of the classical mathematical descriptors of the superposition of independent MAPs also are derived.
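Concretely, a MAP is specified by a pair of matrices (D0, D1): D0 governs phase transitions without arrivals and D1 those accompanied by an arrival, so that Q = D0 + D1 is the generator of the underlying phase process. The fundamental arrival rate, which gives the asymptotic slope of the first moment of the counts, can be computed as below; the 2-phase matrices are invented for illustration:

```python
import numpy as np

# Hypothetical 2-phase MAP: rows of D0 + D1 sum to zero.
D0 = np.array([[-3.0,  1.0],
               [ 0.5, -2.0]])
D1 = np.array([[ 1.5,  0.5],
               [ 0.5,  1.0]])

Q = D0 + D1                      # generator of the phase process
# Stationary phase distribution: pi @ Q = 0 with pi summing to one.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

lam = float(pi @ D1 @ np.ones(2))  # fundamental arrival rate
# Asymptotically, the first moment of the counts grows as E[N(t)] ~ lam * t.
```

The higher-order terms of the asymptotic expansions studied in the thesis refine this linear leading behavior, and the second moment matrix plays the analogous role for the variance of the counts.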
