1 |
Bottleneck improvement using simulation based optimization. Syed, Hidayath Ulla; Thajuddin, Shamnath. January 2016.
Manufacturing companies are constantly looking for new, innovative technologies and tools to identify the real constraints and bottlenecks that impede the performance of their production systems. Several approaches and methods have been developed over the decades to overcome these constraints, but they are not sufficient to pinpoint a bottleneck's exact location and severity. They also generally fall short of suggesting how to implement the right actions in the right order, so as to avoid sub-optimization and waste of time and money. Recent research in simulation-based optimization therefore argues that a more accurate and efficient methodology for supporting decision making in production system development and improvement is badly needed. SCORE (Simulation-based Constraint Removal), developed by Pehrsson (2013) as part of his PhD work, is a promising methodology for identifying and ranking the bottlenecks of production systems using simulation-based multi-objective optimization (SMO). The main principle of this new methodology is to apply SMO with the simultaneous objectives of maximizing throughput and minimizing the number of required improvement actions. Additionally, by applying post-optimality analysis to the generated optimization dataset, the precise improvement actions needed to attain a certain level of production-line performance are automatically put into rank order. The main aim of this project is therefore to apply this technique in a real-world context, in order to understand how far it supports decision making, by conducting a simulation-based bottleneck analysis at one of the major Volvo Group Trucks facilities, identifying the bottlenecks and optimizing them to increase overall productivity. Three research questions related to the effectiveness and accuracy of the methodology are answered through this real-world application study. / (STREAMOD) Streamline modeling and decision support for facts-based production development
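To make the ranking idea concrete, here is a toy sketch of the principle described above: enumerate candidate sets of improvement actions, keep the Pareto front that maximizes throughput while minimizing the number of actions, and read the action ranking off that front. All station names and cycle times are invented, and the simple slowest-station formula stands in for the discrete-event simulation and SMO that SCORE actually uses; this is not Pehrsson's implementation.

```python
from itertools import combinations

# Assumed cycle times (minutes) for a four-station serial line.
base_times = {"S1": 2.0, "S2": 3.5, "S3": 2.8, "S4": 3.1}
# Candidate improvement actions: station -> improved cycle time (also assumed).
improved = {"S1": 1.8, "S2": 2.6, "S3": 2.2, "S4": 2.5}

def throughput(times):
    # Toy line model: the slowest station paces the line (no variability, no buffers).
    return 60.0 / max(times.values())

def dominates(a, b):
    # a dominates b if it is at least as good in both objectives and strictly better in one
    # (objective 1: maximize throughput, objective 2: minimize the number of actions).
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

solutions = []
for k in range(len(improved) + 1):
    for subset in combinations(improved, k):
        times = dict(base_times, **{s: improved[s] for s in subset})
        solutions.append((throughput(times), k, subset))

pareto = [s for s in solutions if not any(dominates(o, s) for o in solutions)]
for tp, k, subset in sorted(pareto, key=lambda s: s[1]):
    print(f"{k} action(s) {list(subset)}: {tp:.1f} units/hour")
```

On this toy data the Pareto front orders the actions as S2, then S4, then S3, which is exactly the kind of rank-ordered improvement list the post-optimality analysis is meant to produce.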
|
2 |
A Model for Increasing Profitability of Raw Mill Processes through Effective Maintenance: A Case Study. Görnebrand, Johan; Johansson, John. January 2012.
Theorists have shown that continuous improvement of the organization is a requirement in today's global market. The nature of the required improvements varies from business to business. This study is conducted within a business where market demand is higher than the average production rate of the product. Improvement in this case means increasing the production rate by addressing the bottleneck section of the plant. To increase the production rate, the unplanned stoppages of the bottleneck section need to be reduced. Reducing these unplanned stoppages increases the profitability of the business, since the bottleneck section regulates the output of the plant. The availability is increased through maintenance. A model is developed and an implementation example is presented to show the increase in availability of the bottleneck section. The model provides a systematic approach to defining market demand, bottlenecks, the severity of assets, the criticality of failures, and a maintenance decision-making process regarding the profitability of maintenance investments. The result of the model is increased availability of the bottleneck section, leading to higher profit. The study is conducted at Cementa AB, owned by Heidelberg Cement Group.
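To illustrate the profitability reasoning above (the bottleneck regulates plant output, so higher bottleneck availability converts directly into extra saleable output), here is a minimal sketch; every figure in it is an invented assumption, not data from the Cementa case.

```python
# Profitability of a maintenance investment that raises bottleneck availability.
# All figures below are invented assumptions, not data from the case study.
mtbf_h, mttr_h = 120.0, 8.0                  # assumed mean time between failures / to repair
avail_before = mtbf_h / (mtbf_h + mttr_h)    # classical steady-state availability
avail_after = 150.0 / (150.0 + 6.0)          # assumed effect of the maintenance action
rate_tph = 200.0                             # assumed bottleneck throughput, tonnes per hour
hours_per_year = 8000.0
margin_per_tonne = 15.0                      # assumed contribution margin per tonne
maintenance_cost = 250_000.0                 # assumed annual cost of the improvement

# Since the bottleneck regulates plant output, extra availability becomes extra output.
extra_output = (avail_after - avail_before) * rate_tph * hours_per_year
net_gain = extra_output * margin_per_tonne - maintenance_cost
print(f"availability {avail_before:.3f} -> {avail_after:.3f}, "
      f"extra output {extra_output:,.0f} t/yr, net gain {net_gain:,.0f} per year")
```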
|
3 |
Computing Exact Bottleneck Distance on Random Point Sets. Ye, Jiacheng. 02 June 2020.
Given a complete bipartite graph on two sets of points containing n points each, the bottleneck matching problem asks for a one-to-one correspondence, also called a matching, that minimizes the length of its largest edge; the length of an edge is simply the Euclidean distance between its end-points. As an application, consider matching taxis to requests while minimizing the largest distance between any request and its matched taxi. The length of the largest edge (also called the bottleneck distance) has numerous applications in machine learning as well as topological data analysis. One can use the classical Hopcroft-Karp (HK) algorithm to find the bottleneck matching. In this thesis, we consider the case where the two point sets A and B are generated uniformly at random from a unit square. Instead of the classical HK algorithm, we implement and empirically analyze a new algorithm by Lahn and Raghvendra (Symposium on Computational Geometry, 2019). Our experiments show that our approach outperforms the HK-based approach for computing the bottleneck matching. / Master of Science / Consider the problem of matching taxis to an equal number of requests. While matching them, one objective is to minimize the largest distance between a request and its match. Finding such a matching is called the bottleneck matching problem. In addition, this optimization problem arises in topological data analysis as well as machine learning. In this thesis, I conduct an empirical analysis of a new algorithm, called the FAST-MATCH algorithm, to find the bottleneck matching. I find that, when large input data is randomly generated from a unit square, the FAST-MATCH algorithm performs substantially faster than the classical methods.
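As a concrete baseline, here is a hedged sketch of the classical approach the abstract mentions: binary-search the candidate bottleneck value and test feasibility with the Hopcroft-Karp algorithm at each step. This is the baseline, not the Lahn and Raghvendra (FAST-MATCH) algorithm evaluated in the thesis, and the use of networkx and the helper names are assumptions of this sketch.

```python
import numpy as np
import networkx as nx

def bottleneck_distance(A, B):
    """Smallest d such that every point of A can be matched one-to-one to a point
    of B using only pairs at Euclidean distance <= d. Binary search over the
    candidate distances, with a Hopcroft-Karp feasibility test at each step."""
    n = len(A)
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    candidates = np.unique(dists)                      # possible bottleneck values
    top = [("a", i) for i in range(n)]                 # left side of the bipartite graph
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        G = nx.Graph()
        G.add_nodes_from(top)
        G.add_nodes_from(("b", j) for j in range(n))
        ii, jj = np.nonzero(dists <= candidates[mid])
        G.add_edges_from((("a", i), ("b", j)) for i, j in zip(ii.tolist(), jj.tolist()))
        matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=top)
        if len(matching) // 2 == n:                    # perfect matching exists at this threshold
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

rng = np.random.default_rng(0)
A, B = rng.random((50, 2)), rng.random((50, 2))        # uniform points in the unit square
print(bottleneck_distance(A, B))
```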
|
4 |
Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks. Soflaei Shahrbabak, Masoumeh. 04 November 2020.
Deep learning techniques have achieved profound success in many challenging real-world applications, including image recognition, speech recognition, and machine translation. This success has increased the demand for developing deep neural networks and more effective learning approaches.
The aim of this thesis is to consider the problem of learning a neural network classifier and to propose a novel approach to solving it under the Information Bottleneck (IB) principle. Based on the IB principle, we associate with the classification problem a representation learning problem, which we call "IB learning". A careful investigation shows that there is an unconventional quantization problem closely related to IB learning. We formulate this problem and call it "IB quantization". We show that IB learning is, in fact, equivalent to the IB quantization problem. Classical results in rate-distortion theory then suggest that IB learning can benefit from a vector quantization approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework that we call "Aggregated Learning (AgrLearn)" for classification with neural network models. In this framework, several objects are jointly classified by a single neural network; in other words, AgrLearn simultaneously optimizes against multiple data samples, unlike standard neural networks. Two variants of the framework are introduced: "deterministic AgrLearn (dAgrLearn)" and "probabilistic AgrLearn (pAgrLearn)".
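For reference, the standard Information Bottleneck objective underlying IB learning is usually written in the following Lagrangian form (this is the generic formulation due to Tishby and co-authors, using conventional notation rather than symbols taken from this thesis):

\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

where T is the learned representation of the input X, Y is the class label, and the multiplier \beta trades compression of X against preservation of information about Y.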
We verify the effectiveness of this framework through extensive experiments on standard image recognition tasks. We also demonstrate its performance on a real-world natural language processing (NLP) task, sentiment analysis, and compare its effectiveness with other available frameworks for the IB learning problem.
|
5 |
Envelope: estimation of bottleneck and available bandwidth over multiple congested links. Bhati, Amit. 12 April 2006.
Bandwidth estimation has been extensively researched in the past. The majority of existing methods assume either negligible or fluid cross-traffic in the network during the analysis. However, on the present-day Internet, these assumptions do not always hold, and over such paths the existing bandwidth estimation techniques become inaccurate. In this thesis, we explore the problem assuming arbitrary cross-traffic and develop a new probing method called Envelope, which can simultaneously estimate bottleneck and available bandwidth over an end-to-end path with multiple heavily congested links. Envelope is based on a recursive extension of the stochastic queuing model first proposed by Kang, Liu, Dai and Loguinov (2004), and on a modified packet-train methodology. We use two small packets to surround the probing packet-trains and preserve the inter-packet spacing of the probe traffic at each router in the path suffix. The preserved spacings are then used by the receiver to estimate bandwidth. We first reproduce results for the single congested router case using the model proposed by Kang et al., and then extend it to the case of multiple congested routers with arbitrary cross-traffic, yielding the Envelope methodology. We evaluate the performance of Envelope in various network path topologies and cross-traffic conditions through extensive NS-2 simulations. We also evaluate the probe-traffic parameters that affect the accuracy of the method and obtain the range of values that provide good estimation results. Finally, we compare the bandwidth estimation results of our method with those of existing methods such as IGI (2003), Spruce (2003), Pathload (2002), and CapProbe (June 2004), using simulation in the Network Simulator (NS-2) with varied network topologies and cross-traffic.
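For background, the basic dispersion relation that packet-train probing builds on can be sketched as follows. This is only the textbook capacity-from-dispersion estimate, not the recursive queuing model behind Envelope, and the packet size and receiver timestamps are invented.

```python
# Capacity from packet-train dispersion: C ~ L / delta, where L is the probe packet
# size and delta the per-packet spacing the train acquires at the narrow link.
packet_size_bits = 1500 * 8                               # assumed 1500-byte probes
arrival_times = [0.0000, 0.0012, 0.0024, 0.0036, 0.0049]  # assumed receiver timestamps (s)

gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
dispersion = sum(gaps) / len(gaps)                        # average inter-packet spacing
capacity_bps = packet_size_bits / dispersion
print(f"estimated bottleneck capacity ~ {capacity_bps / 1e6:.1f} Mbit/s")
```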
|
6 |
Holistic Mine Management By Identification Of Real-Time And Historical Production Bottlenecks. Kahraman, Muhammet Mustafa. January 2015.
Mining has a long history of production and operations management. Economies of scale have changed drastically, and technology has transformed the mining industry significantly. One of the most important technological improvements is the increased ability to track equipment, people, and plants, which provides decision makers with a continuous data stream under dynamic operational conditions. However, managerial approaches have not changed in parallel. Even though many process improvement tools built on equipment/human/plant tracking have been developed (fleet management systems, plant monitoring systems, workforce management systems, etc.), to date there is no holistic approach or system to manage the entire value chain in mining. Mining operations are designed and managed around already known, designated system bottlenecks. However, contrary to common belief in mining, bottlenecks are not static; they can shift from one process or location to another. It is important for management to be aware of new bottlenecks, since their decisions will be affected. Identification of true bottlenecks in real time therefore supports tactical decisions (use of buffers, resource transfer), while identification of historical bottlenecks supports strategic decisions (investments, capacity increases, etc.). This thesis aims to direct managerial focus to the true bottlenecks by identifying and ranking them. The study proposes a methodology for creating a Bottleneck Identification Model (BIM) that can identify true bottlenecks in a value chain in real time or historically, depending on the available data. The approach consists of three phases to detect and rank the bottlenecks. In the first phase, the system is defined and the variables are identified. In the second phase, capacities, rates, and buffers are computed. In the third phase, exceptions are added to account for mine-specific characteristics, and the bottlenecks are identified and ranked.
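A hedged sketch of the ranking step in phases two and three is given below: compare each process's realized rate with its capacity and flag the highest-utilization process as the candidate bottleneck. The process names, rates, and capacities are invented, and the buffer handling and mine-specific exceptions of the actual BIM are omitted.

```python
# Rank value-chain processes by utilization (realized rate / nominal capacity);
# the most heavily utilized process is the candidate bottleneck. All numbers invented.
processes = {            # tonnes per hour: (measured rate, nominal capacity)
    "drilling":   (950, 1400),
    "loading":    (900, 1000),
    "hauling":    (880,  920),
    "crushing":   (870, 1100),
    "processing": (860, 1300),
}
ranked = sorted(processes.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (rate, cap) in ranked:
    print(f"{name:10s} utilization {rate / cap:5.1%}")
# Highest utilization first, so the candidate bottleneck sits at the top of the list.
```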
|
7 |
Bottleneck Identification using Data Analytics to Increase Production Capacity. Ganss, Thorsten Peter. January 2021.
This thesis develops an automated, data-driven bottleneck detection procedure based on real-world data. Following a seven-step process, it is possible to determine the average as well as the shifting bottleneck by automatically applying the active period method. A detailed explanation of how to pre-process the extracted data is presented, which serves as a guideline for other analysts customizing the available code to their needs. The obtained results show a deviation between the expected bottleneck and the bottleneck calculated from production data collected during one week of full production. The expected bottleneck is currently determined by the case company by measuring cycle times physically at the machine, but this procedure does not capture the full behavior of the production line and is therefore recommended to be replaced by the developed automated analysis. Based on the analysis results, different optimization potentials are elaborated to improve both the data quality and the overall production capacity of the investigated production line. In particular, the installed gantry systems need further analysis to decrease their impact on the overall capacity. As for data quality, improving the machine data itself and standardizing timestamps should be prioritized to enable better analysis in the future. Finally, it is recommended to run the analysis several times with new data sets in order to validate the results and improve the overall understanding of the production line's behavior.
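A simplified sketch of the active period method applied in this work is shown below: at any moment, the machine with the longest uninterrupted active period is taken as the momentary bottleneck, and summing that time over the horizon points to the average bottleneck. The event data is invented, and the distinction between sole and shifting bottleneck periods used by the full method is omitted for brevity.

```python
# Each machine: list of (start, end) intervals in which it was active (not waiting).
# Invented event data for three machines over a 150-time-unit horizon.
active = {
    "OP10": [(0, 40), (45, 90), (95, 140)],
    "OP20": [(0, 70), (72, 150)],
    "OP30": [(10, 30), (35, 60), (70, 100)],
}

def current_period_length(intervals, t):
    """Length of the active period covering time t, or 0 if the machine is idle."""
    for start, end in intervals:
        if start <= t < end:
            return end - start
    return 0.0

bottleneck_time = {m: 0.0 for m in active}
horizon, step = 150, 1
for t in range(0, horizon, step):
    lengths = {m: current_period_length(iv, t) for m, iv in active.items()}
    best = max(lengths, key=lengths.get)
    if lengths[best] > 0:
        bottleneck_time[best] += step     # machine with the longest active period "owns" t

for m, share in sorted(bottleneck_time.items(), key=lambda kv: -kv[1]):
    print(f"{m}: momentary bottleneck for {share:.0f} of {horizon} time units")
```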
|
8 |
Computer Music Composition using Crowdsourcing and Genetic Algorithms. Keup, Jessica Faith. 01 January 2011.
When genetic algorithms (GAs) are used to produce music, the results are limited by a fitness bottleneck: to create effective music, the GA needs to be thoroughly trained by humans, which takes extensive time and effort. Applying online collective intelligence, or "crowdsourcing", to train a musical GA is one approach to solving this fitness bottleneck problem. The hypothesis was that music created by a GA trained by a crowdsourced group would be more effective and musically sound than music created by a GA trained by a small group. When a group of reviewers and composers evaluated the music, the crowdsourced songs scored slightly higher overall than the small-group songs, but with the small number of evaluators the difference was not statistically significant.
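The fitness-bottleneck setup can be sketched as a genetic algorithm whose fitness function is an external rating source. Everything below (the note range, the operators, and the dummy rating function standing in for crowdsourced listeners) is an invented placeholder rather than the system built in the thesis.

```python
import random

NOTES = list(range(60, 73))                 # MIDI pitches C4..C5 (assumed representation)

def random_melody(length=16):
    return [random.choice(NOTES) for _ in range(length)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    return [random.choice(NOTES) if random.random() < rate else n for n in melody]

def crowd_fitness(melody):
    # Stand-in for aggregated listener ratings collected online; a dummy score is
    # used here so the sketch runs (it simply prefers smoother melodic lines).
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

population = [random_melody() for _ in range(30)]
for generation in range(50):
    parents = sorted(population, key=crowd_fitness, reverse=True)[:10]  # best-rated melodies
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]
print(max(population, key=crowd_fitness))
```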
|
9 |
The Key Successful Factor and process analysis for The Company to implement the ERP. Chen, Yu-Chung. 22 August 2011.
In recent years the industry environment has changed ever more rapidly, and enterprises face intensifying competition and the need to respond quickly in order to stay competitive; the use of information technology has therefore risen to the top of the agenda, and many companies have rushed to implement ERP systems. Yet one frequently hears of companies that spent large sums on ERP without seeing corresponding benefits. As a result, ERP has gradually shifted in the public mind from a tool for effective competition to a proprietary product of large enterprises, or a mere symbol of whether a company counts as modern, which is deeply regrettable.
As "enterprise of change management into ERP" (Electronic Commerce Research, Winter 2003, Volume I, Phase II) mentioned in the article, the enterprise is full of setbacks to promote the main reason why ERP is: underestimation of change management, budget overruns and time schedule was repeatedly pushed back, in addition to employee resistance to mind is the reason can not be ignored. In other words, is not a warning to avoid the ERP can successfully import it?
What, then, are the key success factors (KSFs) for a company implementing ERP? Is ERP implementation a management innovation, and how is such an innovation carried out? What are the bottlenecks in implementing ERP, and how can they be broken? Can ERP improve the performance of business operations? Does ERP implementation require large amounts of resources, such as manpower and money? And is it true that ERP is the only path to a company's growth, and that a company will lose its competitiveness without it?
This study takes an enterprise that has successfully implemented ERP as a case study, examining its implementation process, the problems encountered, how they were overcome, and the benefits the company has gained since the implementation. By exploring these practitioner success stories, the study aims to identify the key success factors, to set out clearly what matters when promoting an ERP implementation program, and to anticipate the likely problems and how to solve them in order to improve the chances of success.
|