  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analysis and Control of Batch Order Picking Processes Considering Picker Blocking

Hong, Soon Do 2010 August (has links)
Order picking operations play a critical role in the order fulfillment process of distribution centers (DCs). Picking a batch of orders is often favored when customers’ demands create a large number of small orders, since the traditional single-order picking process results in low utilization of order pickers and significant operational costs. Specifically, batch picking improves order picking performance by consolidating multiple orders in a "batch" to reduce the number of trips and total travel distance required to retrieve the items. As more pickers are added to meet increased demand, order picking performance is likely to decline due to significant picker blocking. However, in batch picking, the process of assigning orders to particular batches allows additional flexibility to reduce picker blocking. This dissertation aims to identify, analyze, and control, or mitigate, picker blocking while batch picking in picker-to-part systems. We first develop a large-scale proximity-batching procedure that can enhance the solution quality of traditional batching models to near-optimality as measured by travel distance. Through simulation studies, picker blocking is quantified. The results illustrate: a) a complex relationship between picker blocking and batch formation; and b) a significant productivity loss due to picker blocking. Based on our analysis, we develop additional analytical and simulation models to investigate the effects of picker blocking in batch picking and to identify the picking, batching, and sorting strategies that reduce congestion. A new batching model (called Indexed order Batching Model (IBM)) is proposed to consider both order proximity and picker blocking to optimize the total order picking time. We also apply the proposed approach to bucket brigade picking systems where hand-off delay as well as picker blocking must be considered. The research offers new insights about picker blocking in batch picking operations, develops batch picking models, and provides complete control procedures for large-scale, dynamic batch picking situations. The twin goals of added flexibility and reduced costs are highlighted throughout the analysis.
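To make the batching decision concrete, here is a small, hypothetical sketch of a greedy proximity-based order batching rule (an illustration only, not the dissertation's proximity-batching procedure): each batch is seeded with the largest unassigned order and filled with the orders whose pick aisles overlap it most, up to a cart capacity. The aisle-overlap proximity measure and all data are invented.

```python
# Illustrative greedy "proximity" batching: seed each batch with the largest
# remaining order, then add the orders sharing the most aisles, up to capacity.
# A simplified stand-in, not the dissertation's batching procedure.

def batch_orders(orders, capacity):
    """orders: dict order_id -> set of aisle numbers its items occupy.
    capacity: max number of orders per batch (cart capacity).
    Returns a list of batches (lists of order ids)."""
    remaining = dict(orders)
    batches = []
    while remaining:
        # Seed with the order visiting the most aisles.
        seed = max(remaining, key=lambda o: len(remaining[o]))
        batch, aisles = [seed], set(remaining.pop(seed))
        while remaining and len(batch) < capacity:
            # Add the order with the greatest aisle overlap (proximity proxy).
            nxt = max(remaining, key=lambda o: len(remaining[o] & aisles))
            batch.append(nxt)
            aisles |= remaining.pop(nxt)
        batches.append(batch)
    return batches

print(batch_orders({"A": {1, 2, 3}, "B": {2, 3}, "C": {7, 8}, "D": {8}}, capacity=2))
# [['A', 'B'], ['C', 'D']] -- orders that share aisles travel together
```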
42

Dynamic Control of Serial-batch Processing Systems

Cerekci, Abdullah 14 January 2010 (has links)
This research explores how near-future information can be used to strategically control a batch processor in a serial-batch processor system setting. Specifically, improved control is attempted by using the upstream serial processor to provide near-future arrival information to the batch processor and to meet re-sequencing requests that shorten critical products' arrival times to the batch processor. The objective of the research is to reduce the mean cycle time and mean tardiness of the products being processed by the serial-batch processor system. This research first examines how the mean cycle time performance of the batch processor can be improved by an upstream re-sequencing approach. A control strategy is developed by combining a look-ahead control approach with an upstream re-sequencing approach and is then compared with benchmark strategies through simulation. The experimental results indicate that the new control strategy effectively improves the mean cycle time performance of the serial-batch processor system, especially when the number of product types is large and batch processor traffic intensity is low or medium. These conditions are often observed in typical semiconductor manufacturing environments. Next, the use of near-future information and an upstream re-sequencing approach is investigated for improving the mean tardiness performance of the serial-batch processor system. Two control strategies are devised and compared with the benchmark strategies through simulation. The experimental results show that the proposed control strategies improve the mean tardiness performance of the serial-batch processor system. Finally, the look-ahead control approaches that focus on mean cycle time and mean tardiness performance of the serial-batch processor system are embedded in a new control strategy that addresses both performance measures simultaneously. It is demonstrated that look-ahead batching can be effectively used as a tool for controlling batch processors when multiple performance measures exist.
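As a toy illustration of look-ahead batch-processor control (not one of the control strategies developed in this research), the rule below decides, whenever the batch machine frees up, whether to start the waiting jobs immediately or hold briefly for arrivals announced by the upstream serial processor. The horizon and capacity parameters are hypothetical.

```python
# Toy look-ahead rule for a batch processor fed by an upstream serial machine.
# When the batch machine becomes idle, start immediately if the queue fills a
# batch; otherwise wait for arrivals announced within a short look-ahead
# horizon if they would top up the batch. A simplified stand-in only.

def start_now(queue_len, batch_capacity, announced_arrivals, now, horizon):
    """queue_len: jobs already waiting at the batch machine.
    announced_arrivals: arrival times reported by the upstream serial machine.
    Returns True to load and start a batch now, False to keep waiting."""
    if queue_len == 0:
        return False                      # nothing to process yet
    if queue_len >= batch_capacity:
        return True                       # a full batch is always started
    # Count near-future arrivals the upstream machine has announced.
    imminent = sum(1 for t in announced_arrivals if now < t <= now + horizon)
    # Wait only if there are imminent jobs and they would still fit in the batch.
    return imminent == 0 or queue_len + imminent > batch_capacity

# Example: 2 of 4 places filled, one job due within the horizon -> wait.
print(start_now(queue_len=2, batch_capacity=4,
                announced_arrivals=[10.5, 13.0], now=10.0, horizon=1.0))  # False
```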
43

The Method of Batch Inference for Multivariate Diffusions

Lysy, Martin January 2012 (has links)
Diffusion processes have been used to model a variety of continuous-time phenomena in finance, engineering, and the natural sciences. However, parametric inference has long been complicated by an intractable likelihood function, the solution of a partial differential equation. For many multivariate models, the most effective inference approach involves a large amount of missing data, for which the typical Gibbs sampler can be arbitrarily slow. On the other hand, a recent method of joint parameter and missing-data proposals can lead to a radical improvement, but the acceptance rate of these proposals decays exponentially with the number of observations. We consider here a method of dividing the inference process into separate data batches, each small enough to benefit from joint proposals. A conditional independence argument allows batch-wise missing data to be sequentially integrated out. Although in practice the integration is only approximate, the Batch posterior and the exact parameter posterior can often have similar performance under a frequentist evaluation, for which the true parameter value is fixed. We present an example using Heston's stochastic volatility model for financial assets, but much of the methodology extends to hidden Markov and other state-space models. / Statistics
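In rough notation (an illustration assumed here, not taken from the thesis), the batch idea can be written as a factorization over consecutive data batches $Y_1, \dots, Y_K$, where by the Markov property of the diffusion each batch depends on the past only through the final observation of the previous batch:

\[
p(\theta \mid Y) \;\propto\; p(\theta) \prod_{k=1}^{K} p(Y_k \mid Y_{k-1}, \theta),
\qquad
p(Y_k \mid Y_{k-1}, \theta) \;=\; \int p(Y_k, Z_k \mid Y_{k-1}, \theta)\, \mathrm{d}Z_k ,
\]

with $Z_k$ denoting the finely discretized missing diffusion path inside batch $k$. Each factor involves only a small block of missing data, so joint parameter-and-path proposals retain a workable acceptance rate; integrating the $Z_k$ out batch by batch, even approximately, yields the Batch posterior discussed above.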
44

The safe design of computer controlled pipeless batch plants

Mushtaq, Fesil January 2000 (has links)
High profit (low volume) products are very attractive economically, and are influencing the direction of manufacture towards product-based batch processes. One new system which has a great deal of potential is a "pipeless" plant, in which the reactor moves to different areas of the plant where heating, agitation, etc. take place. There are obvious advantages in its use in providing a means of quickly responding to fast market changes while maintaining high product quality with reduced waste. The basic concept has been successfully demonstrated, with several production plants already in operation, mainly in Japan. Nevertheless, the safety issues associated with pipeless plants have not been dealt with. Three main areas of further work have been identified in the safe design of computer controlled pipeless batch plants: process safety, computer control safety, and scheduling safety. In essence it is a batch process that is carried out, and it therefore entails all the safety issues associated with a batch process, such as the sharing of resources. As with all new processes, it is necessary to identify and eliminate as many hazards as possible at the design stage. Computers can introduce hazards as well as benefits. There is extensive use of computer control in automated pipeless plants, and the primary manner in which problems occur is through hardware and software failures. Possible hazards need to be identified and eliminated at the design stage, without losing the benefits of plant flexibility and speed of product changeover. Scheduling is usually concerned with optimum product output, and does not consider safety. One of the biggest problems with moving reactors is collisions. To overcome, or at least minimise, this problem, the plant layout and schedule require careful consideration; simulation is a very useful tool for demonstrating the interaction between the two. The aim of this research is to develop an integrated approach to hazard identification and safety requirement specification, the result of which should be a methodology that allows the user to produce a safe design for an economically attractive pipeless plant for batch processes.
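To make the collision concern concrete, the following hypothetical sketch (not part of the thesis) checks a pipeless-plant schedule for conflicts by testing whether two moving vessels are scheduled to occupy the same processing station during overlapping time intervals.

```python
# Hypothetical check of a pipeless-plant schedule for station conflicts:
# each move is (vessel, station, start_time, end_time); two vessels occupying
# the same station in overlapping intervals is flagged as a collision risk.

def station_conflicts(moves):
    """moves: list of (vessel, station, start, end) occupation intervals.
    Returns pairs of scheduled occupations that clash at the same station."""
    conflicts = []
    for i, (v1, s1, a1, b1) in enumerate(moves):
        for v2, s2, a2, b2 in moves[i + 1:]:
            if v1 != v2 and s1 == s2 and a1 < b2 and a2 < b1:  # interval overlap
                conflicts.append(((v1, s1, a1, b1), (v2, s2, a2, b2)))
    return conflicts

schedule = [("R1", "mixing", 0, 30), ("R2", "mixing", 25, 50), ("R2", "heating", 0, 25)]
print(station_conflicts(schedule))
# [(('R1', 'mixing', 0, 30), ('R2', 'mixing', 25, 50))] -- R1 and R2 clash at mixing
```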
45

On Synthesis, design and resource optimization in multipurpose batch plants

Seid, Rashid Esmael January 2013 (has links)
In recent years, batch processes have been getting more attention due to their suitability for the production of small-volume, high-value-added products. The flexibility of batch plants allows the production of different products within the same facility, which mandates equipment sharing. Batch manufacturing is typically used in the pharmaceutical, polymer, food and specialty chemical industries, as demands for such products are highly seasonal and are influenced by changing markets. Despite the advantage of batch plants being flexible, they also pose a challenging task to design, synthesize and operate, compared to their continuous counterparts. The profitability of these batch plants is highly dependent on the way the synthesis, design and operation are optimized. Since different types of resources (raw materials, equipment, utilities and manpower) need to be shared by a number of process operations to produce a variety of products, modeling and optimizing the design and operation of batch plants are important for economic benefits. Growing environmental awareness in civil society, and the resulting regulations introduced by national governments, have led chemical industries to consider process integration to reduce their energy and process-water requirements. Energy optimization and the optimization of water use have mainly been treated as separate problems in the literature. The batch production schedules resulting from each of these formulations do not guarantee that the plant is operated optimally. Consequently, a formulation is required that caters for the opportunities that exist for both wastewater minimization and energy integration. This may result in production schedules that improve the operation of the batch plant when compared to optimizing water and energy separately. Presented in this thesis is a mathematical technique that addresses optimization of both water and energy, while simultaneously optimizing the batch process schedule. The scheduling framework used in this study is based on the formulation by Seid and Majozi (2012). This formulation has been shown to result in a significant reduction of computational time, an improvement of the objective function, and fewer time points being required to solve the scheduling problem. The objective is to improve the profitability of the plant by minimizing wastewater generation and utility usage. From a case study it was found that applying water integration alone reduces the total cost by 11.6%, applying energy integration alone reduces the total cost by 29.1%, and applying both energy and water integration reduces the total cost by 34.6%. This indicates that optimizing water and energy integration in the same scheduling framework will reduce the operating cost and environmental impact significantly. This thesis also presents a mathematical model for the design and synthesis of batch plants. The conceptual design problem must determine the number and capacity of the major processing equipment items, pipe connections and storage tanks so as to meet production objectives at the lowest possible capital and operating cost. A recent robust scheduling model based on continuous-time representation is used as a platform for the synthesis and design problem. An improved objective value (revenue) of 228.6% is obtained by this work compared to recently published models for the design and synthesis problem.
Compared with other formulations, the formulation presented in this thesis gives a smaller mathematical model that requires fewer binary variables, continuous variables and constraints. The presented model also considers costs that arise from the pipe network and, consequently, determines the optimal pipe network which should exist between different pieces of equipment. Finally, the medium-term scheduling problem for a multiproduct batch plant is addressed. The intractability of short-term scheduling models when directly applied to medium-term scheduling problems is overcome by applying a decomposition method. The decomposition method uses a two-level mathematical model. The first level determines the types of products and the amounts to be produced in each scheduling subproblem to satisfy the market requirement. The second level determines the detailed sequencing of tasks for the resulting tractably sized subproblems. The recently published robust short-term scheduling model based on continuous time is extended for solving the scheduling subproblems of the second-level decomposition model. The model is applied to the medium-term scheduling problem of a pharmaceutical facility specializing in animal vaccines, using actual plant data. The model effectively solved a makespan minimization problem for a medium-term scheduling horizon of almost 13 weeks. / Thesis (PhD)--University of Pretoria, 2013. / gm2013 / Chemical Engineering / unrestricted
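As a minimal sketch of the kind of first-level allocation such a decomposition performs (an illustration only, not the Seid and Majozi formulation), the model below splits each product's demand across scheduling sub-horizons subject to a rough per-horizon capacity, leaving detailed task sequencing to the second level. It uses the open-source PuLP library, and all product names, demands and capacities are invented.

```python
# Illustrative first level of a two-level decomposition: allocate product
# amounts to scheduling sub-horizons so that demand is met without exceeding
# a rough per-horizon capacity. Detailed task sequencing would be solved
# separately for each sub-horizon. Data and capacities are invented.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

demand = {"vaccineA": 900.0, "vaccineB": 400.0}       # total amount required
horizons = ["week1-2", "week3-4", "week5-6"]
capacity = {"week1-2": 500.0, "week3-4": 500.0, "week5-6": 500.0}

prob = LpProblem("first_level_allocation", LpMinimize)
x = {(p, h): LpVariable(f"x_{p}_{h}", lowBound=0) for p in demand for h in horizons}

# Objective: finish demand as early as possible by penalizing later horizons.
prob += lpSum((i + 1) * x[p, h] for p in demand for i, h in enumerate(horizons))

for p in demand:                                      # meet each product's demand
    prob += lpSum(x[p, h] for h in horizons) >= demand[p]
for h in horizons:                                    # respect sub-horizon capacity
    prob += lpSum(x[p, h] for p in demand) <= capacity[h]

prob.solve()
for (p, h), var in x.items():
    if var.value() and var.value() > 0:
        print(p, h, var.value())
```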
46

Design of integrated batch processes

Sharif, Mona Adel January 1999 (has links)
No description available.
47

Optimization of Batch Sizes: A simulation-based study on how equally sized batches affect production planning

Hjertqvist, Elin, Östman, Malin January 2017 (has links)
Volvo Trucks is one of the world's largest truck manufacturers, with a cab production plant located in Umeå, Sweden. The variation in batch sizes of the different components causes variation in the length of the production plan of subparts, and thus variation in production lead times. This project aims to examine how equally sized batches affect the length of the production plan, and what batch size is optimal to achieve efficient production planning. The examination was conducted with respect to lean principles, and a mathematical model was built to simulate the use of different batch sizes. In order to run the simulation, both historical and new data were used. Parameters of interest for the results are the length and variation of the production plan, the number of set-ups, production frequency, inventory levels and annual production. A batch size of 2.0 hours is optimal, as the length of the production plan then varies the least within the allowed interval. Equally sized batches alone were not found to contribute to more efficient production planning. Smaller batch sizes in combination with equally sized batches were, however, shown to decrease the variation in the production plan, and to result in increased stock turnover and decreased inventory levels. / Volvo Trucks is one of the world's largest truck manufacturers, with a cab manufacturing plant located in Umeå. The variation in batch sizes for different components causes variation in the length of the production plan, which in turn leads to varying lead times. This project aims to examine how levelled batch sizes affect the length of the production plan and which batch size is optimal for efficient production planning. The project was carried out using lean principles, and a mathematical model of the production was built to simulate the use of different batch sizes. Both historical and new data were used to run the simulation. Parameters of interest for the results are the length and variation of the production plan, the number of tool changes, production frequency, inventory levels and annual production. A batch size of 2.0 hours is optimal since the length of the production plan then varies the least within the allowed interval. Equally sized batches alone were not found to contribute to more efficient production planning, but smaller batch sizes in combination with equally sized batches were shown to decrease the variation in the production plan and resulted in increased stock turnover and decreased inventory levels.
48

Design of a real biotechnological multiproduct batch plant with an optimization-based approach

Sandoval Hevia, Gabriela Daniela January 2016 (has links)
Doctorate in Engineering Sciences, Chemistry specialization / Biotechnological products, such as biopharmaceuticals, are products whose production technologies are under constant development. In addition, their production scales are small, making batch plants the most appropriate choice for their manufacture. In particular, multiproduct batch plants allow a variety of biotechnological products with several stages in common to be produced. One way to model the design of multiproduct batch plants is the optimization-based approach, first studied for this type of plant by Robinson and Lonkar, who addressed the design problem by sizing the equipment units that make up the plant. Despite the many advances in the area, which include decisions such as unit duplication, the placement of intermediate storage tanks, production scheduling and environmental considerations, among other improvements, there is still a lack of work in which this type of approach is applied to real plants. This work studies a mixed-integer linear programming (MILP) reformulation of the mixed-integer nonlinear programming (MINLP) problem that arises when formulating the design model of a multiproduct biotechnological batch plant. As a first step, an MILP reformulation is studied that models the design of a plant using equipment sizes from a continuous set and a selection of hosts from a discrete set of options. This reformulation makes use of advanced reformulation techniques and proves to be scalable and reliable for application to real cases. As a second step, the original MILP reformulation was modified to include equipment selection from both a discrete and a continuous set, giving a more realistic approach to modeling a multiproduct biotechnological batch plant, in which units such as reactors can be built to the client's requirements while units such as chromatography columns are only available in discrete sizes offered by the supplier. Data from real processes belonging to an actual multiproduct batch plant allowed the model parameters to be determined, and a comparison between the different production lines and the real plant showed that this type of model can yield large savings in the cost of the plant's main equipment. Finally, since the studied approach relies on modeling and optimization software, the model is friendlier for those who may use it in practice; however, lower-level implementations could improve solution times, allowing the inclusion of more complex formulations such as variable costs or variable production targets.
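As an illustrative fragment of a discrete equipment-selection model (not the thesis formulation), the sketch below picks one chromatography-column size per stage from a supplier catalogue so that the chosen volume covers every product's size-factor requirement at minimum purchase cost. Batch sizes are treated as fixed to keep the constraints linear; continuous sizing, host selection and scheduling are omitted, and every number is invented.

```python
# Illustrative discrete equipment sizing for a multiproduct batch plant:
# for each stage, pick exactly one vessel size from a supplier catalogue so
# that V_j >= S_ij * B_i for every product i, at minimum purchase cost.
# A small fragment only; data are invented and batch sizes B_i are fixed.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

stages = ["capture_column", "polish_column"]
sizes = {"capture_column": {50: 120.0, 100: 200.0, 200: 340.0},   # litres: cost
         "polish_column": {30: 90.0, 60: 150.0, 120: 260.0}}
size_factor = {("proteinA", "capture_column"): 1.4, ("proteinA", "polish_column"): 0.8,
               ("proteinB", "capture_column"): 2.1, ("proteinB", "polish_column"): 0.5}
batch_size = {"proteinA": 60.0, "proteinB": 40.0}                  # fixed [kg/batch]

prob = LpProblem("discrete_equipment_selection", LpMinimize)
y = {(j, v): LpVariable(f"y_{j}_{v}", cat=LpBinary) for j in stages for v in sizes[j]}

prob += lpSum(sizes[j][v] * y[j, v] for j in stages for v in sizes[j])  # capital cost
for j in stages:
    prob += lpSum(y[j, v] for v in sizes[j]) == 1          # exactly one size per stage
    for i in batch_size:
        # The chosen volume must cover this product's requirement at this stage.
        prob += lpSum(v * y[j, v] for v in sizes[j]) >= size_factor[i, j] * batch_size[i]

prob.solve()
print({j: next(v for v in sizes[j] if y[j, v].value() == 1) for j in stages})
```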
49

Minimizing Makespan for Hybrid Flowshops with Batch, Discrete Processing Machines and Arbitrary Job Sizes

Zheng, Yanming 10 August 2010 (has links)
This research studies the hybrid flow shop problem which has parallel batch-processing machines in one stage and discrete-processing machines in the other stages to process jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted as FF|batch1, sj|Cmax. The problem is formulated as a mixed-integer linear program. The commercial solver, AMPL/CPLEX, is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution for even a small problem, e.g., a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational (time) problem encountered while using the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems, and schedules the sub-problems one by one. The proposed BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. An experimental study is conducted to evaluate the effectiveness of the proposed BFD heuristic, which is further evaluated against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the genetic algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is fewer than 50 jobs.
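As a small illustration of the parallel batch-machine sub-problem (a simplified stand-in, not the heuristic proposed in this research), jobs of arbitrary sizes can be packed first-fit decreasing into capacity-limited batches; each batch then occupies the machine for the longest processing time among its jobs, so the machine-level makespan is the sum of the batch processing times. All job data are invented.

```python
# Illustrative single batch-machine sub-problem: jobs of arbitrary sizes are
# packed first-fit decreasing into capacity-limited batches; a batch runs for
# the longest processing time among its jobs, so the machine makespan is the
# sum of batch processing times. Not the BFD heuristic from the thesis.

def pack_batches(jobs, capacity):
    """jobs: list of (job_id, size, processing_time). Returns list of batches."""
    batches = []                                  # each batch: [used_size, [jobs]]
    for job in sorted(jobs, key=lambda j: j[1], reverse=True):   # biggest first
        for batch in batches:
            if batch[0] + job[1] <= capacity:     # first batch the job fits into
                batch[0] += job[1]
                batch[1].append(job)
                break
        else:
            batches.append([job[1], [job]])       # open a new batch
    return [b[1] for b in batches]

jobs = [("j1", 6, 4.0), ("j2", 3, 2.5), ("j3", 5, 3.0), ("j4", 2, 5.0)]
batches = pack_batches(jobs, capacity=8)
makespan = sum(max(p for _, _, p in batch) for batch in batches)
print(batches, "makespan:", makespan)   # batches [[j1, j4], [j3, j2]], makespan 8.0
```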
50

Correction of batch effects in single cell RNA sequencing data using ComBat-Seq

Dullea, Jonathan Tyler 20 February 2021 (has links)
Single cell RNA sequencing allows expression profiles for individual cells to be obtained, offering unprecedented insight into the behavior of individual cells. Insight gained from exploration of individual cells has implications in both cancer and developmental biology. Much of the power of these methods is derived from the sheer amount and granularity of the data that can be collected; however, with this power comes the deleterious introduction of batch effects. Samples sequenced on different days or by different technicians can show variance that cannot be attributed to biological condition but is instead due only to the batch in which they were sequenced. These batch effects can alter the perceived relationship between the main effect and the outcome of interest; for instance, the main effect of cancer status may be hidden by the unwanted and unmodeled variance. Two known methods for the correction of batch effects in bulk RNA sequencing data are ComBat-Seq and Surrogate Variable Analysis (SVA); in this work, we demonstrate that when cell type is known, inclusion of that covariate in ComBat-Seq results in an appropriate correction of the batch effect. We also demonstrate that when cell type is not known, SVA can be used to infer cell-type information from the latent structure of the count matrix, with some loss of accuracy compared to the correction with cell type. This inferred cell-type information can be used in place of the actual cell-type covariate to correct single cell RNA sequencing data with ComBat-Seq; inclusion of surrogate variables helps the accuracy of the correction in certain scenarios. Additionally, when cell type is not known and the cell proportions are balanced between batches, we demonstrate that ComBat-Seq can be used without cell-type information. The efficacy of this procedure is demonstrated with two simulated datasets and a dataset containing Jurkat and 293T cells. These results are then compared to Harmony, a recently reported batch correction algorithm. The procedure reported herein has benefits over Harmony in certain situations, such as when a counts matrix is needed for further analysis or when there is thought to be substantial intra-cell-type variability across batches.
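The sketch below is a much-simplified linear-model stand-in for the idea of covariate-aware batch correction (it is not ComBat-Seq, which fits a negative-binomial model to raw counts): log-expression is regressed on batch and the known cell-type covariate jointly, and only the fitted batch term is subtracted, so biological differences between cell types are preserved. All data are simulated.

```python
# Simplified stand-in for covariate-aware batch correction (NOT ComBat-Seq,
# which works on raw counts with a negative-binomial model): regress
# log-expression on batch and cell type jointly, then remove only the fitted
# batch contribution so cell-type differences are kept. Simulated toy data.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 50
batch = rng.integers(0, 2, n_cells)               # two sequencing batches
cell_type = rng.integers(0, 2, n_cells)           # two known cell types

# Simulate log-expression with a cell-type signal and an additive batch shift.
logexpr = (rng.normal(0, 1, (n_cells, n_genes))
           + np.outer(cell_type, rng.normal(2, 0.5, n_genes))
           + np.outer(batch, rng.normal(1, 0.3, n_genes)))

# Design matrix: intercept, batch indicator, cell-type indicator.
X = np.column_stack([np.ones(n_cells), batch, cell_type])
beta, *_ = np.linalg.lstsq(X, logexpr, rcond=None)    # per-gene coefficients

corrected = logexpr - np.outer(batch, beta[1])        # subtract batch term only

# Per-gene mean difference between batches shrinks toward zero after correction.
before = np.abs(logexpr[batch == 0].mean(0) - logexpr[batch == 1].mean(0)).mean()
after = np.abs(corrected[batch == 0].mean(0) - corrected[batch == 1].mean(0)).mean()
print(f"mean |batch difference| before: {before:.2f}, after: {after:.2f}")
```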
