1. Operation Oriented Digital Twin of Hydro Test Rig
Khademi, Ali (January 2022)
It has become increasingly important to introduce the Digital Twin in additive manufacturing, as it is perceived as a promising step forward in its development and a vital component of Industry 4.0. A Digital Twin is an up-to-date representation of a real asset in operation. The aim of this thesis is to develop a Digital Twin of a hydro test rig: a downscaled turbine test rig whose runner and draft tube are replicas of the Porjus U9 turbine, located in the John-Field laboratory of the Division of Fluid and Experimental Mechanics at Luleå University of Technology (LTU). Digital Twins are created by developing and simulating mathematical models, which should be integrated and validated. A mathematical model of the test rig has been built in Simulink, the MATLAB modelling environment. The rig itself comprises a Kaplan turbine, a hydraulic pump, a magnetic braking system, a rotor, and a flow meter in a closed-loop system. Some test rig parameters are unknown, so two methods were used to estimate them while validating the mathematical model. Optimization here means finding either the maximum or the minimum of a target function over a particular set of parameters. In total, seven parameters of the Simulink model were optimized using two different methods: fmincon in MATLAB and Bayesian optimization, a machine-learning technique. Because fmincon finds only local minima and can become stuck near them, it did not reach the global minimum; Bayesian optimization, in contrast, minimized the cost function more effectively and found the global minimum. / AFC4Hydro
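As a purely illustrative sketch of the two approaches compared above (and not the thesis's actual MATLAB/Simulink setup), the Python fragment below fits seven unknown parameters of a placeholder rig model: scipy's local solver stands in for fmincon, and scikit-optimize's gp_minimize stands in for Bayesian optimization. The simulate_rig function, the measured values, and the bounds are invented for the example.

# Hypothetical parameter-fitting sketch: a local solver versus Bayesian optimization.
import numpy as np
from scipy.optimize import minimize
from skopt import gp_minimize

measured = np.array([1.2, 0.8, 3.1])            # placeholder measured rig outputs

def simulate_rig(params):
    # Placeholder for evaluating the Simulink model with a given parameter set.
    p = np.asarray(params)
    return np.array([p[0] * p[1], p[2] - p[3], p[4] + p[5] * p[6]])

def cost(params):
    # Sum of squared errors between simulated and measured outputs.
    return float(np.sum((simulate_rig(params) - measured) ** 2))

bounds = [(0.0, 5.0)] * 7                       # seven unknown parameters
x0 = np.full(7, 2.5)                            # starting point for the local solver

# Gradient-based local search (analogous to fmincon): fast, but may stall in a local minimum.
local = minimize(cost, x0, bounds=bounds, method="L-BFGS-B")

# Bayesian optimization: builds a surrogate of the cost surface and searches globally.
bayes = gp_minimize(cost, bounds, n_calls=60, random_state=0)

print("local solver cost:", local.fun)
print("Bayesian opt cost:", bayes.fun)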
2. Planning and scheduling problems in manufacturing systems with high degree of resource degradation
Agrawal, Rakshita (07 August 2009)
The term resource is used to refer to a machine, tool group, piece of equipment, or personnel. Optimization models for resource maintenance are obtained in conjunction with other production-related decisions such as production planning, production scheduling, resource allocation, and job inspection. Emphasis is placed on integrating these inter-dependent decisions into a unified optimization framework. This is accomplished for large stationary resources, for small non-stationary resources with a high breakage rate, and for resources that form part of a network.
Owing to the large problem size and high uncertainty, the optimal decisions are determined by formulating and solving the above problems as Markov decision processes (MDPs). Approximate dynamic programming (ADP) based algorithms are used to solve the large optimization problems at hand. In all cases, the performance of the resulting near-optimal policies is compared with that of traditional formulations, which treat resource maintenance decisions independently of other manufacturing-related decisions.
In certain formulations, the resource state is not completely observable, which results in a partially observable MDP (POMDP). An alternative algorithm for solving POMDPs is developed, in which several mixed-integer linear programs (MILPs) are solved during each ADP iteration. This helps obtain better-quality solutions, in an efficient manner, for POMDPs with very large or continuous action spaces.
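As background for the MDP formulation described above, the following toy sketch solves a single-machine degradation model by exact value iteration; the states, transition probabilities, and rewards are invented for illustration, and the thesis itself applies approximate dynamic programming to much larger, integrated models rather than this exact solve.

# Toy machine-degradation MDP: states are degradation levels, actions are
# "produce" (earn reward, risk further wear) or "maintain" (pay a cost, reset).
import numpy as np

n_states = 4                        # 0 = new ... 3 = failed
actions = ["produce", "maintain"]
gamma = 0.95                        # discount factor

def transition(s, a):
    # Return (probability, next_state, reward) triples for state s and action a.
    if a == "maintain":
        return [(1.0, 0, -5.0)]                    # reset to 'new' at a maintenance cost
    if s == 3:
        return [(1.0, 3, -20.0)]                   # a failed machine produces nothing
    yield_reward = 10.0 - 2.0 * s                  # output value degrades with wear
    return [(0.7, s, yield_reward), (0.3, s + 1, yield_reward)]

V = np.zeros(n_states)
for _ in range(1000):                              # value iteration until convergence
    V_new = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in transition(s, a)) for a in actions)
        for s in range(n_states)
    ])
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transition(s, a)))
    for s in range(n_states)
}
print(policy)                                      # typically: produce while healthy, maintain when worn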
3. PolyMage: Automatic Optimization for Image Processing Pipelines
Mullapudi, Ravi Teja (January 2015)
Image processing pipelines are ubiquitous. Every image captured by a camera and every image uploaded to social networks like Google+ or Facebook is processed by a pipeline. Applications in a wide range of domains, such as computational photography, computer vision, and medical imaging, use image processing pipelines. Many of these applications demand high performance, which requires effective utilization of modern architectures. Given the proliferation of camera-enabled devices and social networks, optimizing these emerging workloads has become important at both the data-center and the embedded-device scale.
An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs a point-wise, stencil, sampling, reduction, or data-dependent operation on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, which prevents effective utilization of the parallelism available on modern architectures. The traditional options are to use optimized libraries like OpenCV or to optimize manually. Using libraries precludes optimization across library routines, while manual optimization that accounts for both parallelism and locality is very tedious.
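To make the stage-graph view concrete, the fragment below (plain NumPy, not PolyMage syntax) expresses a two-stage separable blur; the names and image size are arbitrary. Each stage is trivially data-parallel, but the unfused version writes the full intermediate image to memory and reads it back, which is exactly the bandwidth cost discussed above.

# Two-stage pipeline: a horizontal 3-tap blur feeding a vertical 3-tap blur.
import numpy as np

def blur_x(img):
    # Horizontal stencil stage: average each pixel with its left/right neighbours.
    out = np.empty_like(img)
    out[:, 1:-1] = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    out[:, 0], out[:, -1] = img[:, 0], img[:, -1]
    return out

def blur_y(img):
    # Vertical stencil stage: average each pixel with its up/down neighbours.
    out = np.empty_like(img)
    out[1:-1, :] = (img[:-2, :] + img[1:-1, :] + img[2:, :]) / 3.0
    out[0, :], out[-1, :] = img[0, :], img[-1, :]
    return out

image = np.random.rand(1080, 1920)
blurred = blur_y(blur_x(image))    # unfused: the full intermediate blur_x(image) is materialized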
In this thesis, we present the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. The focus of the system is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. We achieve such automation with:
• tiling techniques to improve parallelism and locality by introducing redundant computation (a minimal sketch follows this list),
• a model-driven fusion heuristic which enables a trade-off between locality and redundant computations, and
• an autotuner which leverages the fusion heuristic to explore a small subset of pipeline implementations and find the best performing one.
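Continuing the blur sketch above (and reusing its blur_x, blur_y, and image), the fragment below is a hand-written simplification of the tiling idea from the first bullet: the output is computed in row strips, and the first stage is recomputed on a one-row halo around each strip so the two stages can be fused without materializing a full-size intermediate. It is only meant to illustrate the trade-off; it is not code generated by PolyMage.

def blur_pipeline_tiled(img, tile_rows=64):
    # Fused, tiled schedule: redundant halo computation in exchange for locality.
    h = img.shape[0]
    out = np.empty_like(img)
    for r0 in range(0, h, tile_rows):
        r1 = min(r0 + tile_rows, h)
        lo, hi = max(r0 - 1, 0), min(r1 + 1, h)   # expand the strip by one halo row per side
        strip = blur_y(blur_x(img[lo:hi, :]))     # both stages evaluated on the small strip
        out[r0:r1, :] = strip[r0 - lo : r1 - lo, :]
    return out

# The tiled schedule produces the same result as the unfused pipeline.
assert np.allclose(blur_pipeline_tiled(image), blur_y(blur_x(image)))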
Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization fully automatically. We evaluate our framework on a modern multicore system using a set of seven benchmarks which vary widely in structure and complexity. Experimental results show that the performance of pipeline implementations generated by our approach is:
• up to 1.81× better than pipeline implementations manually tuned using Halide, a state-of-the-art language and compiler for image processing pipelines,
• on average 5.39× better than pipeline implementations automatically tuned using Halide and OpenTuner, and
• on average 3.3× better than naive pipeline implementations which only exploit parallelism without optimizing for locality.
We also demonstrate that the performance of PolyMage-generated code is better than or comparable to implementations using OpenCV, a state-of-the-art image processing and computer vision library.