1

Optimal part delivery dates in small lot stochastic assembly systems

Srivastava, Rajiv K. January 1989 (has links)
An important issue in the design and operation of assembly systems is the coordination of part deliveries and processing operations. These decisions can have a significant impact on inventory cost and customer service. The problem is especially complex when actual delivery and processing times are stochastic in nature, as is the case in small lot manufacturing. In this research a new methodology is developed for determining optimal part delivery dates in stochastic small lot assembly systems. This methodology is based on a descriptive model in which station completion time is the maximum of several random variables. The part arrival and processing times are assumed to follow various known probability distributions. The model includes consideration of limited buffers between stations. The overall objective is to minimize the expected total of part and subassembly inventory costs, makespan cost, and tardiness cost. An approach based on optimizing individual stations in isolation is used to obtain the part delivery dates at each station. Comparison with the nonlinear-programming-based approach to the problem indicates that it generates nearly as good solutions in a fraction of the computation time. This approach is then used to study system behavior under various operating conditions. Results indicate that the lognormal and gamma distributions result in higher total costs than the normal distribution. However, the normal distribution can be used to determine part delivery dates even if the actual distribution is lognormal or gamma, with relatively small errors compared to the solutions obtained using the correct distribution. Variability is the most important factor in the design of the system, and affects the determination of due dates, buffer capacity requirements, choice of distribution, and estimates of system performance. The role of buffer capacities, however, is not very critical in the design of small lot unbalanced lines. / Ph. D.
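The descriptive model above, in which assembly cannot start until the last part arrives, can be sketched with a small Monte Carlo estimate. This is a minimal illustration, not the author's methodology: the gamma arrival delay, lognormal processing time, due date, and cost coefficients are all hypothetical parameters chosen for the sketch, and the full method optimizes the delivery dates rather than merely evaluating them.

```python
import random

def station_cost(delivery_dates, n_sims=20000, due_date=10.0,
                 hold_cost=1.0, tardy_cost=5.0):
    """Estimate expected cost at a single assembly station.

    Each part arrives at its delivery date plus a random delay; assembly
    starts only when all parts are on hand (the maximum of the arrival
    times) and then takes a random processing time. Parts that wait
    accrue holding cost; a finish past the due date accrues tardiness cost.
    """
    total = 0.0
    for _ in range(n_sims):
        arrivals = [d + random.gammavariate(2.0, 0.5) for d in delivery_dates]
        start = max(arrivals)                       # all parts must be present
        finish = start + random.lognormvariate(0.5, 0.25)
        holding = sum(start - a for a in arrivals)  # time each part waits
        tardiness = max(0.0, finish - due_date)
        total += hold_cost * holding + tardy_cost * tardiness
    return total / n_sims

# Trade-off studied in the thesis: later delivery dates cut holding
# cost but increase the risk of tardiness.
random.seed(1)
print(station_cost([0.0, 0.0, 0.0]))
print(station_cost([6.0, 6.0, 6.0]))
```

Optimizing `delivery_dates` against this estimator, one station at a time, mirrors the station-in-isolation approach described in the abstract.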
2

Learning in Partially Observable Markov Decision Processes

Sachan, Mohit 21 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Learning in Partially Observable Markov Decision process (POMDP) is motivated by the essential need to address a number of realistic problems. A number of methods exist for learning in POMDPs, but learning with limited amount of information about the model of POMDP remains a highly anticipated feature. Learning with minimal information is desirable in complex systems as methods requiring complete information among decision makers are impractical in complex systems due to increase of problem dimensionality. In this thesis we address the problem of decentralized control of POMDPs with unknown transition probabilities and reward. We suggest learning in POMDP using a tree based approach. States of the POMDP are guessed using this tree. Each node in the tree has an automaton in it and acts as a decentralized decision maker for the POMDP. The start state of POMDP is known as the landmark state. Each automaton in the tree uses a simple learning scheme to update its action choice and requires minimal information. The principal result derived is that, without proper knowledge of transition probabilities and rewards, the automata tree of decision makers will converge to a set of actions that maximizes the long term expected reward per unit time obtained by the system. The analysis is based on learning in sequential stochastic games and properties of ergodic Markov chains. Simulation results are presented to compare the long term rewards of the system under different decision control algorithms.
