101

Effective Design and Operation of Supply Chains for Remnant Inventory Systems

Wang, Zhouyan 02 June 2006 (has links)
This research considers a stochastic supply chain problem that (a) has applications in a number of continuous production industries, and (b) integrates elements of several classical operations research problems, including the cutting stock problem, inventory management, facility location, and distribution. The research also uses techniques such as stochastic programming and Benders' decomposition. We consider an environment in which a company has geographically dispersed distribution points where it can stock standard sizes of a product from its plants. In the most general problem, we are given a set of candidate distribution centers with different fixed costs at the different locations, and we may choose not to operate facilities at one or more of these locations. We assume that the customer demand for smaller sizes comes from other geographically distributed points on a continuing basis; this demand is stochastic in nature and is modeled by a Poisson process. Furthermore, we address a sustainable manufacturing environment in which the trim is not considered waste but rather gets recycled and thus has an inherent value associated with it. Most importantly, the problem is not a static one where a one-time decision has to be made. Rather, decisions are made on a continuing basis, and decisions made at one point in time have a significant impact on those made at later points. An example of where this problem arises is a steel or aluminum company that produces product in rolls of standard widths. The decision maker must decide which facilities to open, find long-run replenishment rates for standard sizes, and develop long-run policies for cutting these into smaller pieces so as to satisfy customer demand. The cutting stock, facility-location, and transportation problems reside at the heart of the research, and all of these are integrated into the framework of a supply chain. We can see that (1) a decision made at some point in time affects the ability to satisfy demand at a later point, and (2) there might be multiple ways to satisfy demand. The situation is further complicated by the fact that customer demand is stochastic and that this demand could potentially be satisfied by more than one distribution center. Given this background, this research examines broad alternatives for how the company's supply chain should be designed and operated in order to remain competitive with smaller and more nimble companies. The research develops an LP formulation, a mixed-integer programming formulation, and a stochastic programming formulation to model different aspects of the problem. We present new solution methodologies based on Benders' decomposition and the L-shaped method to solve the NP-hard mixed-integer problem and the stochastic problem, respectively. Results from duality are used to develop shadow prices for the units in stock, and these in turn are used to develop a policy that helps make decisions on an ongoing basis. We investigate the theoretical underpinnings of the models, develop new computational methods and derive interesting properties of their solutions, build a simulation model to compare the policies developed with others commonly in use, and conduct computational studies to compare the performance of the new methods with corresponding existing methods.
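The following is a minimal sketch of the L-shaped (Benders) cut logic referenced in this abstract, applied to a toy one-size stocking problem with a shortage-penalty recourse. All costs, demands, and scenario probabilities are hypothetical, and a grid search stands in for the LP master problem; the dissertation's actual model spans multiple sizes, facilities, and cutting patterns.

```python
# Minimal L-shaped (Benders) sketch for a toy one-size stocking problem.
# All data are hypothetical; this only illustrates the optimality-cut logic.

C_UNIT = 2.0            # first-stage cost per unit stocked
Q_SHORT = 5.0           # second-stage penalty per unit of unmet demand
SCENARIOS = [(0.3, 40.0), (0.5, 70.0), (0.2, 100.0)]  # (probability, demand)
X_MAX = 120.0

def recourse(x):
    """Expected second-stage cost Q(x) and a subgradient of Q at x."""
    cost = sum(p * Q_SHORT * max(d - x, 0.0) for p, d in SCENARIOS)
    grad = -Q_SHORT * sum(p for p, d in SCENARIOS if d > x)
    return cost, grad

cuts = []               # optimality cuts: theta >= q_k + g_k * (x - x_k)
x_k = 0.0
for _ in range(20):
    q_k, g_k = recourse(x_k)
    cuts.append((q_k, g_k, x_k))
    # Solve the 1-D master min_x C_UNIT*x + max_k cut_k(x) by a fine grid search
    # (a stand-in for the LP master used in a real implementation).
    best_x, best_obj = None, float("inf")
    for i in range(1201):
        x = X_MAX * i / 1200
        theta = max(q + g * (x - xk) for q, g, xk in cuts)
        obj = C_UNIT * x + theta
        if obj < best_obj:
            best_x, best_obj = x, obj
    if abs(best_x - x_k) < 1e-6:
        break
    x_k = best_x

print(f"approximate optimal stock level: {x_k:.1f}, cost: {best_obj:.1f}")
```

On this toy instance the cuts converge to a stock level of 70 units, the point where the marginal stocking cost balances the expected shortage penalty.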
102

A Methodology to Develop a Decision Model Using a Large Categorical Database with Application to Identifying Critical Variables during a Transport-Related Hazardous Materials Release

Clark, Renee M 02 June 2006 (has links)
An important problem in the use of large categorical databases is extracting information to make decisions, including identification of critical variables. Due to the complexity of a dataset containing many records, variables, and categories, a methodology for simplification and measurement of associations is needed to build the decision model. To this end, the proposed methodology uses existing methods for categorical exploratory analysis. Specifically, latent class analysis and loglinear modeling, which together constitute a three-step, non-simultaneous approach, were used to simplify the variables and measure their associations, respectively. This methodology has not previously been used to extract data-driven decision models from large categorical databases. A case in point is a large categorical database at the U.S. Department of Transportation (DOT) for hazardous materials releases during transportation. This dataset is important due to the risk posed by an unintentional release. However, due to the lack of a data-congruent decision model of a hazmat release, current decision making, including critical variable identification, is limited at the Office of Hazardous Materials within the DOT. This gap in modeling of a release is paralleled by a similar gap in the hazmat transportation literature. The literature has an operations research and quantitative risk assessment focus, in which the models consist of simple risk equations or more complex, theoretical equations. Thus, based on critical opportunities at the DOT and gaps in the literature, the proposed methodology was demonstrated using the hazmat release database. The methodology can be applied to other categorical databases for extracting decision models, such as those at the National Center for Health Statistics. A key goal of the decision model, a Bayesian network, was identification of the most influential variables relative to two consequences or measures of risk in a hazmat release: dollar loss and release quantity. The most influential variables for dollar loss were found to be variables related to container failure, specifically the causing object and the item-area of failure on the container. Similarly, for release quantity, the container failure variables were also most influential, specifically the contributing action and failure mode. In addition, potential changes in these variables for reducing consequences were identified.
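As a rough illustration of ranking categorical variables by strength of association with a consequence, the sketch below scores hypothetical incident records with mutual information. This is a simple stand-in, not the latent class analysis, loglinear modeling, or Bayesian network methodology used in the dissertation, and the records and variable names are invented.

```python
import math
from collections import Counter

# Hypothetical categorical incident records:
# (causing_object, failure_mode, consequence)
records = [
    ("forklift", "puncture", "high_loss"), ("forklift", "puncture", "high_loss"),
    ("corrosion", "leak", "low_loss"),     ("corrosion", "leak", "low_loss"),
    ("forklift", "leak", "low_loss"),      ("corrosion", "puncture", "high_loss"),
    ("vibration", "loose_fitting", "low_loss"), ("vibration", "leak", "low_loss"),
]

def mutual_information(pairs):
    """I(X; Y) in nats from a list of (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

names = ["causing_object", "failure_mode"]
for i, name in enumerate(names):
    mi = mutual_information([(r[i], r[2]) for r in records])
    print(f"{name}: I(variable; consequence) = {mi:.3f} nats")
```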
103

Optimizing the Efficiency of the United States Organ Allocation System through Region Reorganization

Kong, Nan 02 June 2006 (has links)
Allocating organs for transplantation has been controversial in the United States for decades. Two main allocation approaches developed in the past are (1) to allocate organs to patients with higher priority at the same locale, and (2) to allocate organs to patients with the greatest medical need regardless of their locations. To balance these two allocation preferences, the U.S. organ transplantation and allocation network has lately implemented a three-tier hierarchical allocation system, dividing the U.S. into 11 regions composed of 59 Organ Procurement Organizations (OPOs). At present, a procured organ is offered first at the local level, and then regionally and nationally. The purpose of allocating organs at the regional level is to increase the likelihood that a donor-recipient match exists, compared to the former allocation approach, and to increase the quality of the match, compared to the latter approach. However, the question of which regional configuration is the most efficient remains unanswered. This dissertation develops several integer programming models to find the most efficient set of regions. Unlike previous efforts, our model addresses efficient region design for the entire hierarchical system given the existing allocation policy. To measure allocation efficiency, we use the intra-regional transplant cardinality. Two estimates are developed in this dissertation. One is a population-based estimate; the other is an estimate based on the situation where there is only one waiting list nationwide. The latter estimate is a refinement of the former in that it captures the effect of national-level allocation and the heterogeneity of clinical and demographic characteristics among donors and patients. To model national-level allocation, we apply a modeling technique similar to spill-and-recapture in the airline fleet assignment problem. A clinically based simulation model is used in this dissertation to estimate several necessary parameters in the analytic model and to verify the optimal regional configuration obtained from the analytic model. The resulting optimal region design problem is a large-scale set-partitioning problem in which there are too many columns to handle explicitly. Given this challenge, we adapt branch and price in this dissertation. We develop a mixed-integer programming pricing problem that is both theoretically and practically hard to solve. To alleviate this computational difficulty, we apply geographic decomposition to solve many smaller-scale pricing problems based on pre-specified subsets of OPOs instead of one large pricing problem. When solving each smaller-scale pricing problem, we also generate multiple "promising" regions that are not necessarily optimal to the pricing problem. In addition, we attempt to develop more efficient solutions for the pricing problem by studying alternative formulations and developing strong valid inequalities. The computational studies in this dissertation use clinical data and show that (1) regional reorganization is beneficial, and (2) our branch-and-price application is effective in solving the optimal region design problem.
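To make the set-partitioning structure concrete, the sketch below exhaustively partitions a tiny hypothetical set of five OPOs into regions and keeps the partition with the largest illustrative benefit score. Brute force only works at this toy scale; the dissertation's problem has far too many candidate regions (columns), which is why branch and price is needed. The benefit function and OPO labels are invented.

```python
from itertools import combinations

# Hypothetical toy instance: 5 OPOs and a benefit estimate (e.g., expected
# intra-regional transplants) for each candidate region (subset of OPOs).
OPOS = frozenset("ABCDE")

def benefit(region):
    # Placeholder scoring: pooling OPOs helps, but coordination costs grow
    # faster for large regions (purely illustrative numbers).
    base = {"A": 10, "B": 8, "C": 12, "D": 6, "E": 9}
    k = len(region) - 1
    return sum(base[o] for o in region) + 3 * k - 2 * k ** 2

def best_partition(opos):
    """Exhaustively search partitions of `opos` into regions (toy scale only)."""
    opos = frozenset(opos)
    if not opos:
        return 0, []
    first = min(opos)
    rest = opos - {first}
    best = (float("-inf"), None)
    # Try every region containing `first`, plus the best partition of the rest.
    for k in range(len(rest) + 1):
        for others in combinations(sorted(rest), k):
            region = frozenset({first, *others})
            sub_val, sub_regions = best_partition(opos - region)
            val = benefit(region) + sub_val
            if val > best[0]:
                best = (val, [region] + sub_regions)
    return best

value, regions = best_partition(OPOS)
print("best total benefit:", value)
print("regions:", [sorted(r) for r in regions])
```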
104

ASSEMBLY DIFFERENTIATION IN CAD SYSTEMS

Manley, David G. 27 September 2006 (has links)
This work presents a data model for differentiating and sharing assembly design (AsD) information during collaborative design. Joints between parts are an important aspect of assembly models that are often ambiguous when models are shared. Although various joints may have similar geometries and topologies, their joining methods and process parameters may vary significantly. It is possible to attach notes and annotations to geometric entities within CAD environments in order to distinguish joints; however, such textual information does not readily prepare models for sharing among collaborators or for downstream processes such as simulation and analysis. At present, textual information must be examined and interpreted by the human designer and cannot be interpreted or utilized by the computer, making the querying of information potentially cumbersome and time consuming. This work presents an AsD ontology that explicitly represents assembly constraints, including joining constraints, and infers any remaining implicit ones. By relating concepts through ontology technology rather than just defining an arbitrary data structure, assembly and joining concepts can be captured in their entirety or extended as necessary. By using the knowledge captured by the ontology, similar-looking joints can be differentiated and the collaboration and downstream product development processes can be further automated, as the semantics attached to the assembly model prepare it for use within the Semantic Web.
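The contrast the abstract draws between free-text annotations and machine-interpretable joint semantics can be illustrated with a small sketch. A plain Python record stands in here for ontology individuals and properties; the field names and values are hypothetical and are not the AsD ontology itself.

```python
from dataclasses import dataclass

# A free-text CAD note cannot be queried reliably by downstream tools:
note = "Parts A and B joined; see weld spec in drawing footnote."

# A structured joint record (a stand-in for ontology individuals/properties)
# makes the same information machine-interpretable.
@dataclass
class Joint:
    part_a: str
    part_b: str
    method: str           # e.g., "arc_weld", "rivet", "adhesive"
    process_params: dict  # e.g., weld current, rivet diameter

joints = [
    Joint("bracket", "frame", "arc_weld", {"current_A": 180, "filler": "ER70S-6"}),
    Joint("panel", "frame", "rivet", {"diameter_mm": 4.8}),
]

# Downstream tools (simulation, process planning) can now filter by semantics:
welds = [j for j in joints if j.method == "arc_weld"]
print([f"{j.part_a}-{j.part_b}" for j in welds])
```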
105

SURROGATE SEARCH: A SIMULATION OPTIMIZATION METHODOLOGY FOR LARGE-SCALE SYSTEMS

Lai, Jyh-Pang 27 September 2006 (has links)
For settings in which system performance cannot be evaluated by analytical methods, simulation models are widely utilized, especially for complex systems. To optimize these models, simulation optimization techniques have been developed. These attempt to identify the system designs and parameters that result in (near) optimal system performance. Although simulation can provide more realistic results, the computational time for simulator execution, and consequently for simulation optimization, may be very long. Hence, the major challenge in determining improved system designs by incorporating simulation and search methodologies is to develop more efficient simulation optimization heuristics or algorithms. This dissertation develops a new approach, Surrogate Search, to determine near-optimal system designs for large-scale simulation problems that contain combinatorial decision variables. First, surrogate objective functions are identified by analyzing simulation results to observe system behavior. Multiple linear regression is utilized to examine simulation results and construct surrogate objective functions. The identified surrogate objective functions, which can be evaluated quickly, are then utilized as simulator replacements in the search methodologies. For multiple problems containing different settings of the same simulation model, only one surrogate objective function needs to be identified. The development of surrogate objective functions benefits the optimization process by reducing the number of simulation iterations. Surrogate Search approaches are developed for two combinatorial problems, operator assignment and task sequencing, using a large-scale sortation system simulation model. The experimental results demonstrate that Surrogate Search can be applied to such large-scale simulation problems and outperforms a recognized simulation optimization methodology, Scatter Search (SS). This dissertation provides a systematic methodology for performing simulation optimization on complex operations research problems and contributes to the simulation optimization field.
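A minimal sketch of the surrogate idea described above: fit a linear regression surrogate to a limited number of simulation evaluations, then search the cheap surrogate instead of the simulator. The "simulator" here is a hypothetical noisy function over two continuous parameters, whereas the dissertation's problems are combinatorial and use a full sortation-system simulation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Stand-in for a long-running discrete-event simulation (hypothetical)."""
    # True response: performance drops quadratically away from a sweet spot.
    return -(x[0] - 3.0) ** 2 - 2.0 * (x[1] - 1.5) ** 2 + rng.normal(0.0, 0.2)

# 1. Run a limited number of simulations at sampled design points.
designs = rng.uniform(0.0, 5.0, size=(30, 2))
responses = np.array([expensive_simulation(x) for x in designs])

# 2. Fit a regression surrogate (linear in quadratic features) by least squares.
def features(x):
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])

X = np.array([features(x) for x in designs])
beta, *_ = np.linalg.lstsq(X, responses, rcond=None)

# 3. Search the cheap surrogate instead of the simulator.
grid = [(a, b) for a in np.linspace(0, 5, 51) for b in np.linspace(0, 5, 51)]
best = max(grid, key=lambda x: features(np.array(x)) @ beta)
print("surrogate-optimal design:", best)
```

The search step evaluates only the fitted surrogate, so thousands of candidate designs can be screened for the cost of the 30 simulation runs used to build it.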
106

When to Initiate, When to Switch, and How to Sequence HIV Therapies: A Markov Decision Process Approach

Shechter, Steven Michael 27 September 2006 (has links)
HIV and AIDS are major health care problems throughout the world, with 40 million people living with HIV by the end of 2005. In that year alone, 5 million people acquired HIV, and 3 million people died of AIDS. For many patients, advances in therapies over the past ten years have changed HIV from a fatal disease to a chronic, yet manageable condition. The purpose of this dissertation is to address the challenge of effectively managing HIV therapies, with a goal of maximizing a patient's total expected lifetime or quality-adjusted lifetime. Perhaps the most important issue in HIV care is when a patient should initiate therapy. Benefits of delaying therapy include avoiding the negative side effects and toxicities associated with the drugs, delaying selective pressures that induce the development of resistant strains of the virus, and preserving a limited number of treatment options. On the other hand, the risks of delayed therapy include the possibility of irreversible damage to the immune system, development of AIDS-related complications, and death. We develop a Markov decision process (MDP) model that examines this question, and we solve it using clinical data. Because of the development of resistance to administered therapies over time, an extension to the initiation question arises: when should a patient switch therapies? Also, inherent in both the initiation and switching questions is the question of which therapy to use each time. We develop MDP models that consider the switching and sequencing problems, and we discuss the challenges involved in solving these models.
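The initiation question has the structure of an optimal-stopping MDP, which the toy sketch below illustrates with value iteration. The health states, transition probabilities, and post-initiation lifetimes are entirely hypothetical and are not the clinical data or reward structure used in the dissertation.

```python
import numpy as np

# Toy optimal-stopping MDP for therapy initiation (all numbers hypothetical).
# States 0..3 are increasingly healthy; state 4 is death (absorbing).
N_HEALTH = 4

# Monthly transition probabilities while waiting (untreated disease progresses).
P_wait = np.array([
    [0.80, 0.05, 0.000, 0.000, 0.150],   # from state 0
    [0.15, 0.75, 0.050, 0.000, 0.050],   # from state 1
    [0.00, 0.05, 0.900, 0.030, 0.020],   # from state 2
    [0.00, 0.00, 0.019, 0.980, 0.001],   # from state 3
])

# Expected remaining lifetime (months) if therapy is initiated in each state.
init_value = np.array([90.0, 140.0, 175.0, 180.0])

V = np.zeros(N_HEALTH)                 # value of each health state
for _ in range(5000):                  # value iteration to (near) convergence
    wait_value = 1.0 + P_wait[:, :N_HEALTH] @ V   # live one month, then continue
    V = np.maximum(init_value, wait_value)

policy = np.where(init_value >= 1.0 + P_wait[:, :N_HEALTH] @ V, "initiate", "wait")
for s in range(N_HEALTH):
    print(f"state {s}: V = {V[s]:6.1f} months, action = {policy[s]}")
```

With these illustrative numbers, the optimal policy initiates therapy in the sicker states but waits in the healthiest state, mirroring the threshold structure the initiation question is about.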
107

On the Variance of Electricity Prices in Deregulated Markets

Ruibal, Claudio 31 January 2007 (has links)
Since 1990, many countries have started a deregulation process in the electricity wholesale market with a view to gaining efficiency, lowering prices, and encouraging investment. In most of these markets the objectives have been accomplished, but at the same time prices have shown high volatility. This is mainly due to certain unique characteristics of electricity as a commodity: it cannot be easily stored, and the flow across lines is governed by the laws of physics; electricity must be delivered on the spot to the load. Electricity price variance has received little study. Variance is important for constructing prediction intervals for the price, and it is a key factor in pricing derivatives, which are used for energy risk management purposes. A fundamental bid-based stochastic model is presented to predict electricity hourly prices and the average price in a given period. The model captures both the economic and physical aspects of the pricing process, considering two sources of uncertainty: availability of the units and demand. This work is based on three oligopoly models, Bertrand, Cournot, and Supply Function Equilibrium (SFE), and obtains closed-form expressions for the expected value and variance of electricity hourly prices and the average price. Sensitivity analysis is performed on the number of firms, anticipated peak demand, and price elasticity of demand. It turns out that as the number of firms in the market decreases, the expected values increase by a significant amount, especially for the Cournot model. Variances for the Cournot model also increase, but variances for the SFE model decrease, taking even smaller values than Bertrand's. Price elasticity of demand severely affects expected values and variances in the Cournot model, as does the firms' anticipated peak demand relative to full installed capacity in the SFE model. Market design and market rules should take these two parameters into account. Finally, a refinement of the models is used to investigate to what extent prices can be predicted more accurately when a temperature forecast is at hand. It is demonstrated that an accurate temperature forecast can significantly reduce the prediction error of electricity prices.
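A textbook symmetric Cournot benchmark gives some intuition for the reported dependence on the number of firms. This is only an illustration with a linear demand curve and a random demand intercept, not the dissertation's bid-based stochastic model, which also accounts for unit availability.

```latex
% Symmetric Cournot benchmark: N identical firms, constant marginal cost c,
% linear inverse demand p = a - bQ, and a random demand intercept a with
% mean \mu and variance \sigma^2.
\[
  p^{*} = \frac{a + N c}{N + 1}, \qquad
  \mathrm{E}[p^{*}] = \frac{\mu + N c}{N + 1}, \qquad
  \mathrm{Var}(p^{*}) = \frac{\sigma^{2}}{(N + 1)^{2}}.
\]
% Fewer firms (smaller N) raise both the expected price (when \mu > c) and its
% variance, consistent with the Cournot results summarized above.
```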
108

MULTIVARIATE STATISTICAL PROCESS CONTROL FOR CORRELATION MATRICES

Sindelar, Mark Francis 13 June 2007 (has links)
Measures of dispersion in the form of covariance control charts are the multivariate analog of the univariate R-chart and are used in conjunction with multivariate location charts such as the Hotelling T² chart, much as the R-chart is the companion to the univariate X-bar chart. Significantly more research has been directed toward location measures, but three multivariate statistics (|S|, Wi, and G) have been developed to measure dispersion. This research explores the correlation component of the covariance statistics and demonstrates that, in many cases, the contribution of correlation is less significant than originally believed; it also offers suggestions for how to implement a correlation control chart when correlation is the variable of primary interest. This research mathematically analyzes the potential use of the three covariance statistics (|S|, Wi, and G), modified for the special case of correlation. A simulation study is then performed to characterize the behavior of the two modified statistics that are found to be feasible. Parameters varied include the sample size (n), the number of quality characteristics (p), the variance, and the number of correlation matrix entries that are perturbed. The performance and utility of the front-running correlation (modified Wi) statistic are then examined by comparison to similarly classed statistics and by trials with real and simulated data sets, respectively. Recommendations for the development of correlation control charts are presented, an outgrowth of which is the understanding that correlation often does not have a large effect on the dispersion measure.
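The sketch below shows one simple way a correlation-based plotting statistic can be monitored: compute the determinant of each subgroup's sample correlation matrix and compare it against a lower limit estimated from in-control simulation. This is a simplified illustration, not the modified |S|, Wi, or G statistics developed in the dissertation, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 25                      # quality characteristics, subgroup size

def corr_det(sample):
    """Determinant of the sample correlation matrix of one subgroup."""
    return np.linalg.det(np.corrcoef(sample, rowvar=False))

# Estimate an in-control reference distribution for the statistic by simulation
# (independent characteristics => population correlation matrix is identity).
reference = np.array([corr_det(rng.standard_normal((n, p))) for _ in range(5000)])
lcl = np.percentile(reference, 0.135)   # rough 3-sigma-equivalent lower limit

# Monitor a new subgroup: a shift that introduces correlation pushes |R| toward 0.
shifted_corr = np.array([[1.0, 0.8, 0.0],
                         [0.8, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])
chol = np.linalg.cholesky(shifted_corr)
new_subgroup = rng.standard_normal((n, p)) @ chol.T
stat = corr_det(new_subgroup)
print(f"|R| = {stat:.3f}, LCL = {lcl:.3f}, signal = {stat < lcl}")
```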
109

Modeling Disease Management Decisions for Patients with Pneumonia-related Sepsis

Kreke, Jennifer E. 25 September 2007 (has links)
Sepsis, the tenth-leading cause of death in the United States, accounts for more than $16.7 billion in annual health care spending. A significant factor in these costs is unnecessarily long hospital lengths of stay, which stem from the lack of optimal hospital discharge policies and the inability to assess a patient's true underlying health state effectively. Researchers have explored ways of standardizing hospital discharge policies by comparing various strategies, but have not been able to determine optimal policies due to the large number of treatment options. Furthering the state of research into decisions made in the management of patients with sepsis, this dissertation presents clinically based optimization models of pneumonia-related sepsis that use patient data to model disease progression over time. Formulated using Markov decision process (MDP) and partially observable Markov decision process (POMDP) techniques, these models consider the clinician's decisions of when to test for additional information about the patient's underlying health state and when to discharge the patient from the hospital. This work utilizes data from the Genetic and Inflammatory Markers for Sepsis (GenIMS) study, a large multi-center clinical trial led by the University of Pittsburgh School of Medicine. A key aim of the GenIMS trial is to demonstrate that the levels of certain cytokines are predictors of patient survival. Utilizing these results, the models presented in this dissertation consider the question of when to test for cytokine levels using testing procedures that may be costly and inaccurate. A significant result of this dissertation is the demonstration that testing should be performed when a clinician is considering the decision to discharge the patient from the hospital. This study characterizes optimal testing and hospital discharge policies for multiple problem instances. In particular, multi-region control-limit policies are demonstrated for specific patient cohorts defined by age and race. It is shown that these control-limit policies depend on the patient's length of stay in the hospital. The effects of testing cost and accuracy on the optimal testing and discharge policies are also explored. Finally, clinical interpretations of the optimal policies are provided to demonstrate how these models can be used to inform clinical practice.
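A toy two-state POMDP fragment illustrates why an imperfect, costly test is most valuable near the discharge decision: Bayes' rule sharpens the clinician's belief exactly when the discharge-versus-keep choice is borderline. The sensitivities, costs, and outcome values below are hypothetical and are not the GenIMS-based model.

```python
# Toy two-state POMDP fragment: the patient is either "recovering" or "septic".
# A cytokine test is imperfect; Bayes' rule updates the clinician's belief.
# All probabilities and values are hypothetical.

SENS, SPEC = 0.85, 0.90           # P(test+ | septic), P(test- | recovering)
V_DISCHARGE = {"recovering": 100.0, "septic": -400.0}   # outcome values
V_KEEP = -10.0                    # cost of one more inpatient day
TEST_COST = 2.0

def posterior_septic(belief, test_positive):
    """P(septic | test result) from prior belief P(septic) via Bayes' rule."""
    p_pos = SENS * belief + (1 - SPEC) * (1 - belief)
    if test_positive:
        return SENS * belief / p_pos
    return (1 - SENS) * belief / (1 - p_pos)

def discharge_value(belief):
    return (1 - belief) * V_DISCHARGE["recovering"] + belief * V_DISCHARGE["septic"]

def value_with_test(belief):
    """Expected value of testing, then discharging only if the test is negative."""
    p_pos = SENS * belief + (1 - SPEC) * (1 - belief)
    v_neg = discharge_value(posterior_septic(belief, False))
    return -TEST_COST + p_pos * V_KEEP + (1 - p_pos) * v_neg

for b in (0.02, 0.10, 0.30):
    print(f"P(septic)={b:.2f}: discharge={discharge_value(b):7.1f}, "
          f"keep={V_KEEP:7.1f}, test-then-decide={value_with_test(b):7.1f}")
```

In this example, discharging without testing is best when the belief in sepsis is very low, while testing before the discharge decision dominates once the belief is uncertain enough to make discharge risky.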
110

Three Essays on Decision Making under Uncertainty in Electric Power Systems

Wang, Lizhi 25 September 2007 (has links)
This thesis consists of three essays, discussing three different but connected problems of decision making under uncertainty in electric power systems. The first essay uses a system model to examine how various factors affect the market price of electricity, and decomposes the price to quantitatively evaluate the contributions of individual factors as well as their interactions. Sensitivity analysis results from a parametric quadratic program are applied in the computation. The second essay formulates the well-studied security constrained economic dispatch (SCED) problem as a Markov decision process model, in which the action space is a polyhedron defined by linear generation and transmission constraints. Such a model enables the decision maker to accurately evaluate the impact of a dispatch decision on the entire future operation of the electric power system. The third essay examines the effect of demand-side and supply-side uncertainties on the exercise of market power. Solutions under the Bertrand, Cournot, and linear supply function equilibrium (LSFE) models are derived and compared. The three problems studied in the essays together represent different levels of the decision-making process in a sophisticated deregulated electric power system, using techniques from both mathematical programming and probability and statistics.
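For readers unfamiliar with how an electricity market price forms, the sketch below shows the basic merit-order picture: units are dispatched in order of marginal cost and the marginal unit sets the uniform price. The generator data are hypothetical, and the essays' models add transmission constraints, uncertainty, and strategic bidding on top of this simplified view.

```python
# Hypothetical merit-order dispatch: sort units by marginal cost, fill demand,
# and let the marginal unit set the uniform market price.

generators = [            # (name, capacity MW, marginal cost $/MWh)
    ("nuclear", 800, 12.0),
    ("coal",    600, 25.0),
    ("ccgt",    500, 40.0),
    ("peaker",  300, 95.0),
]
demand = 1650.0           # MW

dispatch, remaining, price = {}, demand, None
for name, cap, cost in sorted(generators, key=lambda g: g[2]):
    output = min(cap, remaining)
    dispatch[name] = output
    remaining -= output
    if output > 0:
        price = cost       # last unit dispatched sets the price
    if remaining <= 0:
        break

print("dispatch (MW):", dispatch)
print("market clearing price ($/MWh):", price)
```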
