  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Virtual process capability

Mackertich, Neal A 01 January 1998
The quality cost of non-conformance associated with first run production builds is typically more than five times that of later production runs. If a manufacturing organization is to gain market share and increase its profitability, it must explore methods of accelerating its learning curves through defect prevention. Current "Transition to Production" concept methodologies attempt, with limited success, to accelerate organizational learning through Design for Manufacturability (DFM), design phase dimensional management studies, manufacturing floor statistical methods (SPC, DOE, etc.), and various qualitative strategies. While each of these techniques is effective to some degree in reducing future nonconformances, an integrated, design-phase approach utilizing current technology is needed. "Virtual Process Capability" (VPC) is a methodology for integrating statistical process capability knowledge directly into the hardware design phase, resulting in the improved performance and reduced product costs typically associated with mature product manufacturing. The intent behind the methodology is to realistically simulate the manufacture of hardware products by understanding their underlying model equations and the statistical distributions of each contributing parameter. Once each product has been simulated and an expected percentage defective has been estimated, mathematical programming and statistical quality engineering techniques are then utilized for improvement purposes. Data taken from the practical application of this methodology at Raytheon Aircraft has conservatively estimated that for each dollar invested, ten are saved. As a technical extension to this developed methodology, statistical insights and methods are provided as to how product and process improvement analysis is best accomplished.
Included within this area of discussion is the statistical development and validation of improved measures that detect dispersion and mean effects more efficiently than traditional methods. Additionally, mathematical programming techniques are creatively employed as an improved mechanism for the optimization of nominal-the-best type problems.
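The simulation step described in the abstract can be sketched as a simple Monte Carlo tolerance analysis. The stack-up model, parameter distributions, and specification limits below are hypothetical illustrations, not data from the dissertation:

```python
import random

def simulate_defect_rate(model, params, spec, n=100_000, seed=0):
    """Monte Carlo estimate of the expected fraction defective.

    model:  function mapping sampled parameter values to the output dimension
    params: list of (mean, std_dev) pairs, one per contributing parameter
    spec:   (lower, upper) specification limits on the output
    """
    rng = random.Random(seed)
    lo, hi = spec
    defects = 0
    for _ in range(n):
        values = [rng.gauss(mu, sigma) for mu, sigma in params]
        if not (lo <= model(values) <= hi):
            defects += 1
    return defects / n

# Hypothetical stack-up: gap = housing - part1 - part2, spec 0.5 +/- 0.4 mm
gap = lambda v: v[0] - v[1] - v[2]
rate = simulate_defect_rate(gap, [(10.5, 0.10), (5.0, 0.08), (5.0, 0.08)], (0.1, 0.9))
```

Once the expected fraction defective is estimated this way, the optimization step the abstract mentions would adjust nominals and tolerances to drive the rate down.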
562

Finite-memory control of partially observable systems

Hansen, Eric Anton 01 January 1998
A partially observable Markov decision process (POMDP) is a model of planning and control that enables reasoning about actions with stochastic effects and observations that provide imperfect information. It has applications in diverse fields that include artificial intelligence, operations research and optimal control, although computational difficulties have limited its use. This thesis presents new dynamic-programming and heuristic-search algorithms for solving infinite-horizon POMDPs. These algorithms represent a plan or policy as a finite-state controller and exploit this representation to improve the efficiency of problem solving. One contribution of this thesis is an improved policy-iteration algorithm that searches in a policy space of finite-state controllers. It is based on a new interpretation of the dynamic-programming operator for POMDPs as the transformation of a finite-state controller into an improved finite-state controller. Empirically, it outperforms value iteration in solving infinite-horizon POMDPs. Dynamic-programming algorithms such as policy iteration and value iteration compute an optimal policy for every possible starting state. An advantage of heuristic search is that it can focus computation on finding an optimal policy for a single starting state. However, it has not been used before to find solutions with loops, that is, solutions that take the form of finite-state controllers. A second contribution of this thesis is to show how to generalize heuristic search to find policies that take this more general form. Three algorithms that use heuristic search to solve POMDPs are presented. Two solve special cases of the POMDP problem. The third solves the general POMDP problem by iteratively improving a finite-state controller in the same way as policy iteration, but focuses computation where it is most likely to improve the value of the controller for a starting state.
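The controller representation the thesis describes can be made concrete with a small policy-evaluation sketch: each controller node pairs an action with observation-conditioned successor nodes, and its value is found by iterating the Bellman-style recursion. The dense probability-table layout below is an assumption for illustration, not the thesis's implementation:

```python
def evaluate_controller(nodes, P, R, O, gamma=0.95, iters=200):
    """Iteratively evaluate V(n, s) for a finite-state controller.

    nodes:       list of (action, {observation: next_node}) pairs
    P[a][s][s2]: state-transition probabilities
    R[a][s]:     expected immediate reward
    O[a][s2][o]: observation probabilities after taking a and reaching s2
    """
    S = len(R[0])
    V = [[0.0] * S for _ in nodes]
    for _ in range(iters):
        newV = [[0.0] * S for _ in nodes]
        for n, (a, succ) in enumerate(nodes):
            for s in range(S):
                v = R[a][s]
                for s2 in range(S):
                    for o, n2 in succ.items():
                        v += gamma * P[a][s][s2] * O[a][s2][o] * V[n2][s2]
                newV[n][s] = v
        V = newV
    return V

# Tiny sanity case: one state, one action, one observation, reward 1 per step
V = evaluate_controller(nodes=[(0, {0: 0})], P=[[[1.0]]], R=[[1.0]], O=[[[1.0]]])
```

Policy iteration, as described in the abstract, alternates such an evaluation with a dynamic-programming step that transforms the controller into an improved one.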
563

Capacity planning in a semiconductor wafer fabrication facility with time constraints between process steps

Robinson, Jennifer K 01 January 1998
Central to the advancement of the U.S. economy is the efficient production of semiconductors, which in turn depends upon having accurate methods of planning semiconductor wafer fabrication facility capacity. A characteristic of wafer fabrication that makes capacity planning particularly difficult is the presence of time constraints between process steps, also known as time bound sequences. In a time bound sequence, there exists a step that must be completed within some fixed time interval of an earlier step. An example in semiconductor manufacturing is a furnace operation that must be started within two hours of a prior clean operation. If more than two hours elapse before the furnace operation can begin, the job must be sent back to the clean operation for reprocessing. The capacity of a time bound sequence can be difficult to predict. At low equipment utilizations, lots flow through with few delays, and are rarely sent back for reprocessing. At higher arrival rates, however, or for highly variable systems, time bound sequences can rapidly become unstable. To know the capacity of some time bound sequences requires knowing the entire distribution of lot cycle times. This research uses simulation to understand the behavior of time bound sequences, and then develops analytic models to estimate their capacity. This dissertation focuses first on the simplest type of time bound sequences, those that involve only two operations. For such sequences, the time constraint applies only to the time in queue for the second operation. In this case, a simple approximation based on M/M/c queueing formulas is shown to perform quite well in predicting the probability of reprocessing. This approximation provides a bound that can easily be included in spreadsheet capacity models. A fluid model is then developed for the more complex situation of time bound sequences with intermediate operations.
Based on the behavior of the model, several practical guidelines are given for planning capacity in the presence of time bound sequences. The most significant of these guidelines is a method for selecting time constraint values for which the probability of reprocessing is very small, so that systems will be well-behaved.
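For the two-operation case, the M/M/c-based approximation mentioned in the abstract can be illustrated with the standard Erlang C waiting-time tail; the function names and interface here are assumptions for illustration:

```python
import math

def erlang_c(c, a):
    """Probability an arriving lot must wait in an M/M/c queue (offered load a = lam/mu)."""
    head = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / math.factorial(c) * c / (c - a)
    return tail / (head + tail)

def prob_reprocess(lam, mu, c, t):
    """Approximate probability that the queueing delay before the second
    operation exceeds the time constraint t, so the lot must be reprocessed.
    Uses the M/M/c tail P(W > t) = C(c, a) * exp(-(c*mu - lam) * t)."""
    a = lam / mu
    if a >= c:
        return 1.0  # unstable queue: the constraint is violated almost surely
    return erlang_c(c, a) * math.exp(-(c * mu - lam) * t)
```

In spreadsheet form, a planner could cap the arrival rate so that this probability stays below an acceptable rework rate, mirroring the guideline of choosing time constraint values for which reprocessing is rare.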
564

A dynamic theory for the integration of social and economic networks with applications to supply chain and financial networks

Wakolbinger, Tina B 01 January 2007
Uzzi (1996, p. 674) highlighted that there is a “growing need to understand how social structure assists or impedes economic performance.” In this dissertation, I contribute to this understanding by constructing dynamic supernetwork models that explicitly integrate social networks with economic network models and that rigorously capture the role that relationship levels play. The research in this dissertation is motivated by the growing literature that empirically and theoretically highlights the importance of relationships in supply chains (cf. Cannon and Perreault (1999), Bernardes and Fensterseifer (2004), and Baker and Faulkner (2004)) and financial transactions (cf. Berger and Udell (1995), Anthony (1997), and Uzzi (1999)). As this literature shows, the existence of appropriate social networks can affect not only the risk associated with the transactions but also transaction costs. By explicitly including the role that relationships play in economic transactions, I extend the previous research on supply chain network models (see, for example, Nagurney, Dong, and Zhang (2002), Nagurney, Cruz, and Matsypura (2003), and Nagurney and Matsypura (2005)). Furthermore, I extend the literature on financial network models (see, for example, Nagurney and Ke (2001, 2003) and Nagurney and Cruz (2003a,b, 2004)). Specifically, I first develop a model consisting of an integrated supply chain and social network. I then construct a model consisting of an integrated financial and social network. Finally, I extend both these models to an international setting. The social networks consist of relationships of different strength as they have been described in the papers by Granovetter (1973), Freeman, Borgatti, and White (1991), and Golicic, Foggin, and Mentzer (2003). The “supernetwork” models describe how the behavior of the multicriteria decision-makers and induced flows influence the co-evolution of social and economic networks. 
Numerical examples highlight the unique ability of this framework to analyze the interaction between the social network and the economic network. The models are based on variational inequality theory for the study of the equilibrium states (cf. Nagurney (1999)) and projected dynamical systems theory for the study of the associated dynamics (cf. Nagurney and Zhang (1996a)).
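The projected-dynamical-systems machinery can be sketched with a minimal Euler-type scheme (in the spirit of Nagurney and Zhang (1996a)); the cost function F and the nonnegative orthant as the feasible set are illustrative assumptions:

```python
def euler_projection(F, x0, steps=5000, step_size=0.01):
    """Euler scheme for a projected dynamical system on the nonnegative orthant:
    x_{k+1} = max(0, x_k - a * F(x_k)).  Stationary points coincide with
    solutions of the variational inequality <F(x*), x - x*> >= 0 for feasible x."""
    x = list(x0)
    for _ in range(steps):
        fx = F(x)
        x = [max(0.0, xi - step_size * fi) for xi, fi in zip(x, fx)]
    return x

# Hypothetical separable cost F(x) = (x1 - 2, x2 + 1): equilibrium at (2, 0)
x_star = euler_projection(lambda v: [v[0] - 2.0, v[1] + 1.0], [0.0, 1.0])
```

In the supernetwork models, x would stack product flows, prices, and relationship levels, and the trajectory of the scheme traces the co-evolution of the social and economic networks toward equilibrium.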
565

Supply chain coordination in the presence of consumer returns

Ruiz-Benitez, Rocio 01 January 2007
We study the effect that consumer returns have on the coordination of a two-echelon supply chain with a single manufacturer and a single retailer that faces stochastic demand and a certain proportion of consumer returns for a single product. Returned goods command a full refund and result in reverse logistics costs for both retailer and manufacturer. The manufacturer sets a wholesale price for the product and may also set a repurchase price at which she will buy any product left over at the retailer at the end of the selling season. The selling price for the product may be either given exogenously or a variable under the retailer's control given stochastic, price-dependent demand. The retailer sets the order quantity and in the latter case also the selling price. We analyze the optimal centralized and decentralized profit-maximizing solutions and compare the optimal actions and profits with those associated with the classical model that ignores consumer returns. The results we obtain are counterintuitive: (1) higher profits and better coordination can be achieved when the players acting in a decentralized fashion do not consider any information about consumer returns as they make their pricing and ordering decisions, (2) retailer, manufacturer and total supply chain profits increase as the retailer faces a larger share of the logistics costs associated with consumer returns, (3) buy-back contracts may be detrimental to supply chain coordination if consumer returns are ignored in the decision-making process. We also study the case in which the retailer postpones his pricing decision until after demand uncertainty is resolved when commercial returns are present in the system. We observe that, also in the presence of commercial returns, the price postponement strategy leads to larger expected profits for both manufacturer and retailer, and thus to better coordination of the supply chain.
In the last part of this dissertation, we study the Returns Allowance Credit Contract. This new type of contract is implemented when commercial returns are present in the supply chain and the retailer bears all the logistics costs associated with the returns. The manufacturer offers a certain returns allowance credit in order not to lose the retailer's goodwill.
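The retailer's ordering problem under consumer returns can be sketched as a small Monte Carlo profit model; the parameter names and the simple full-refund accounting below are illustrative assumptions, not the dissertation's formulation:

```python
import random

def expected_retailer_profit(q, p, w, r, cost_r, demand, n=50_000, seed=0):
    """Monte Carlo expected retailer profit with consumer returns.

    q:      order quantity            p: selling price
    w:      wholesale price           r: fraction of sales returned (full refund)
    cost_r: retailer logistics cost per returned unit
    demand: function taking an RNG and returning one random demand draw
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d = demand(rng)
        sales = min(q, d)
        kept = sales * (1 - r)                    # units consumers keep
        total += p * kept - w * q - cost_r * r * sales
    return total / n
```

Sweeping q (and, in the price-setting case, p) over a grid would locate the decentralized retailer's best response, which could then be compared against a centralized optimum.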
566

Cycle decomposition, Steiner trees, and the facility layout problem

Keen Patterson, Margaret 01 January 2003
The facility layout problem is modelled as a cycle decomposition process in which the maximum-weight clique and travelling salesman problems are utilized to extract cycles from the graph adjacency matrix. The space between cycles is triangulated so the graph is maximally planar. The adjacency graph is then systematically developed into a block plan layout. With the use of the maximum-weight clique algorithm, the procedure addresses layout problems that are not 100% dense. Many examples are utilized to demonstrate the flexibility of the algorithm and the resulting adjacency graph and block plan layout drawings. The Steiner Circulation Network solution, derived from an adjacency graph solution and its dual graph, provides a minimum-cost system of hallways and connecting links for the material handling system. Using the flows between activities and departments in a layout problem, the circulation network provides the necessary link between the steps of finding the adjacency graph solution and finding a useful block plan layout. A case study demonstrates how the solution for the layout and its material handling system can be integrated. Computational results up to size n = 100 are presented, along with a comparative study against a competitive algorithm.
567

Analysis, design, and management of supply chain networks with applications to time-sensitive products

Yu, Min 01 January 2012
With supply chains spanning the globe, and with increasing time-sensitivity for various products in many markets, timely deliveries are becoming a strategy as important as productivity, quality, and even innovation (see, e.g., Gunasekaran, Patel, and McGaughey (2004), Christopher (2005), and Nagurney (2006)). A product is considered to be time-sensitive if there is a strict time requirement regarding that product, either as a characteristic of the product itself or on the demand side. In particular, a time-sensitive product must have at least one of the following two properties: (1) the product loses its value rapidly, due to either obsolescence or perishability, which can lead to extra waste and cost if unused; (2) the demand for it is sensitive to the elapsed time for order fulfillment; the failure to satisfy the demand on time may result in the loss of potential market share, or, even worse, additional injuries or death as in times of crises. This dissertation formulates, analyzes, and solves a spectrum of supply chain network problems for time-sensitive products, ranging from fast fashion to food to pharmaceuticals. Specifically, I first develop a model that captures the trade-offs between the operational costs and time issues in the apparel industry. I then construct a sustainable fashion supply chain network model under oligopolistic competition and brand differentiation. I, subsequently, capture the deterioration of fresh produce along the entire supply chain through arc multipliers with time decay. Finally, I consider the supply chain network design problem for critical needs products, as in times of crises and humanitarian relief operations. I also develop a supply chain network design/redesign model with multiple products, with particular relevance to healthcare.
This dissertation consists of advances in the modeling, analysis, and design of supply chain networks for time-sensitive products, all unified through the methodology of variational inequality theory (see Nagurney (1999)), coupled with network theory and multicriteria decision-making. The framework captures the underlying behavior associated with the operation and management of the associated supply chains, whether that of central optimization or competition, allows for the graphical depiction of the supply chain network structures, and enables efficient and effective solution.
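The arc-multiplier device for perishability can be illustrated with a toy exponential time-decay example; the decay rates and path data below are hypothetical:

```python
import math

def arc_multiplier(decay_rate, duration):
    """Fraction of fresh produce surviving a supply chain activity,
    modeled as an exponential time-decay arc multiplier."""
    return math.exp(-decay_rate * duration)

def path_throughput(flow_in, arcs):
    """Propagate flow along a path; each arc is a (decay_rate, duration) pair."""
    flow = flow_in
    for rate, duration in arcs:
        flow *= arc_multiplier(rate, duration)
    return flow
```

Composing multipliers along processing, storage, and transport arcs in this way lets a network model account for how much product actually survives to meet demand.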
568

Novice drivers: Development and evaluation of training program for hazard anticipation, hazard mitigation and attention maintenance skills in complex driving scenarios

Mehranian, Hasmik 01 January 2013
The overall goal of this research was to isolate the differences between inexperienced and experienced drivers in complex scenarios, to design a training program to reduce these differences in both simple and complex scenarios, and then to evaluate the effectiveness of the training program in both simple and complex scenarios. The results of Experiment 1 support the hypothesis that drivers with more driving experience will anticipate more possible hazards and will better mitigate hazards by driving slower in the segments of the road where hazards can materialize. Two training protocols, Multi-Skill (MulS) training and Placebo training, were used to train two groups of inexperienced drivers. The Multi-Skill training program consisted of three PC-based modules, each dedicated to one of the skills, and a driving simulator-based practice drive where users could practice their skills in simple and complex driving scenarios. Both the PC-based and simulator-based training modules were designed using methods that had proven successful in the design of previous training programs for hazard anticipation, hazard mitigation and attention maintenance performance in simple scenarios. The results of Experiment 2 support this hypothesis for two of the three skills, hazard anticipation and hazard mitigation. With respect to hazard anticipation, on average, MulS training increased hazard anticipation performance by 35 percentage points. With respect to hazard mitigation, MulS training decreased the average velocity of drivers' vehicles in the presence of hazards by 1.9 mph, whereas Placebo training increased the average velocity of drivers' vehicles in the presence of hazards by 4.1 mph. Overall, the Multi-Skill training program proved successful in improving users' performance in complex driving scenarios on two of the three critical driving skills, hazard anticipation and hazard mitigation.
569

Stochastic dynamic optimization models for societal resource allocation

Bayram, Armagan 01 January 2014
We study a class of stochastic resource allocation problems that specifically deals with effective utilization of resources in the interest of social value creation. These problems are treated as a separate class of problems mainly due to the nonprofit nature of the application areas, as well as the abstract structure of social value definition. As part of our analysis of these unique characteristics in societal resource allocation, we consider two major application areas involving such decisions. The first application area deals with resource allocations for foreclosed housing acquisitions as part of the response to the foreclosure crisis in the U.S. Two stochastic dynamic models are developed and analyzed for these types of problems. In the first model, we consider strategic resource allocation decisions by community development corporations (CDCs), which aim to minimize the negative effects of foreclosures by acquiring, redeveloping and selling foreclosed properties in their service areas. We model this strategic decision process through different types of stochastic mixed-integer programming formulations, and present alternative solution approaches. We also apply the models to real-world data obtained through interactions with a CDC, and perform both policy related and computational analyses. Based on these analyses, we present some general policy insights involving tradeoffs between different societal objectives, and also discuss the efficiency of exact and heuristic solution approaches for the models. In the second model, we consider a tactical resource allocation problem, and identify socially optimal policies for CDCs in dynamically selecting foreclosed properties for acquisition as they become available over time. The analytical results based on a dynamic programming model are then implemented in a case study involving a CDC, and social return based measures defining selectivity rates at different budget levels are specified. 
The second application area involves dynamic portfolio management approaches for optimization of surgical team compositions in robotic surgeries. For this problem, we develop a stochastic dynamic model to identify policies for optimal team configurations, where optimality is defined based on the minimum experience level required to achieve the maximum attainable performance over all ranges of feasible experience measures. We derive individual and dependent performance values of each surgical team member by using data on operating room time and team member experience, and then use them as inputs to a stochastic programming based framework that we develop. Several insights and guidelines for dynamic staff allocation to surgical teams are then proposed based on the analytical and numerical results derived from the model.
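The dynamic property-selection model can be sketched as a small finite-horizon dynamic program; the one-arrival-per-period structure and unit acquisition cost are simplifying assumptions for illustration, not the dissertation's model:

```python
def acquisition_values(values, probs, budget, horizon, cost=1):
    """Finite-horizon DP for dynamically selecting properties as they arrive.

    Each period one candidate property appears, with social value values[i]
    drawn with probability probs[i]; acquiring it costs `cost` budget units.
    Returns V[t][b], the optimal expected social value with b budget units
    remaining at period t, from which accept/reject thresholds follow.
    """
    V = [[0.0] * (budget + 1) for _ in range(horizon + 1)]
    for t in range(horizon - 1, -1, -1):
        for b in range(budget + 1):
            ev = 0.0
            for v, p in zip(values, probs):
                keep = V[t + 1][b]
                take = v + V[t + 1][b - cost] if b >= cost else float("-inf")
                ev += p * max(keep, take)
            V[t][b] = ev
    return V
```

A property is worth acquiring at period t exactly when its value exceeds V[t+1][b] - V[t+1][b-cost], which is the kind of budget-dependent selectivity rate the case study measures.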
570

International multi-sector, multi-instrument financial modeling and computation: Statics and dynamics

Siokos, Stavros 01 January 1997
The goal of this dissertation is to provide a series of static and dynamic models of competitive multi-instrument, multi-sector, and multi-currency financial equilibrium which will yield the optimal composition of assets and liabilities in the portfolio of every sector of each country. The equilibrium market prices of every instrument in each currency, as well as the equilibrium exchange rate prices for each currency, are also obtained. In addition, market imperfections such as taxes, transaction costs, price policy interventions, and the presence of financial hedging instruments are taken into consideration. The models presented here are based on the fundamental economic theory of finance, and relax many of the assumptions that much of the literature is based upon. For example, there is no need for a risk-free instrument or a global portfolio, and all sectors in the economy do not have to share homogeneous expectations on prices. On the contrary, heterogeneity of opinions plays a critical role in the determination of the asset allocation as well as in the price derivation. Moreover, sectors do not hold the same amount of capital, and are not subject to the same types of transaction costs and taxes, since the models under consideration have the ability to impose taxes and transaction costs that depend both on the identity of a sector and on the type of an instrument. Moreover, the monetary authorities of each country (or currency) have the ability to apply different price floors and ceilings on every instrument so that they can control the market according to their strategies. All the models as well as the computational methods suggested here are based on the methodologies of finite-dimensional variational inequality theory for the exploration of statics and equilibrium states, and on projected dynamical systems theory for the study of dynamics and disequilibrium behavior.
Simultaneously, visualization and formulation of financial problems as network flow problems provide one with the opportunity of applying network-based algorithms, coupled with the aforementioned methodologies, for computational purposes. The models presented here are accompanied by a detailed qualitative analysis that provides conditions of existence and uniqueness of equilibrium patterns as well as general sensitivity analysis results.
