51

Enabling Timing Analysis of Complex Embedded Software Systems

Kraft, Johan January 2010 (has links)
Cars, trains, trucks, telecom networks and industrial robots are examples of products relying on complex embedded software systems running on embedded computers. Such systems may consist of millions of lines of program code, developed by hundreds of engineers over many years, often decades. Over the long life-cycle of such systems, the main part of the product development cost is typically not the initial development but the software maintenance, i.e., improvements and corrections of defects over the years. A major share of the maintenance cost is the verification of the system after changes have been applied, which often requires a huge amount of testing. However, today's techniques are not sufficient, as defects are often found post-release, by the customers. This area is therefore of high relevance for industry. Complex embedded systems often control machinery where timing is crucial for accuracy and safety. Such systems therefore have important timing requirements, such as maximum response times. However, when maintaining complex embedded software systems, it is difficult to predict how changes may impact the system's run-time behavior and timing, e.g., response times. Analytical and formal methods for timing analysis exist, but are often hard to apply in practice to complex embedded systems, for several reasons. As a result, the industrial practice for deciding the suitability of a proposed change, with respect to its run-time impact, is to rely on the subjective judgment of experienced developers and architects. This is a risky and inefficient trial-and-error approach, which may waste large amounts of person-hours on implementing unsuitable software designs with potential timing or performance problems. Such problems generally cannot be detected until late stages of testing, when the updated software system can be tested at system level under realistic conditions. Even then, it is easy to miss them.
If products are released containing software with latent timing errors, the consequences can be hugely costly, such as car recalls, or even accidents. Even when such problems are found during testing, they necessitate design changes late in the development project, which causes delays and increases costs. This thesis presents an approach for impact analysis with respect to run-time behavior, such as timing and performance, for complex embedded systems. The impact analysis is performed through optimizing simulation, where the simulation models are automatically generated from the system implementation. This approach allows the consequences of proposed designs, for new or modified features, to be predicted by prototyping the change in the simulation model at a high level of abstraction, e.g., by increasing the execution time of a particular task. Thereby, designs leading to timing, performance, or resource-usage problems can be identified early, before implementation, and late redesigns are avoided, which improves development efficiency and predictability as well as software quality. The contributions presented in this thesis are within four areas related to simulation-based analysis of complex embedded systems: (1) simulation and simulation optimization techniques, (2) automated extraction of simulation models from source code, (3) methods for validation of such simulation models, and (4) run-time recording techniques for model extraction, impact analysis and model validation purposes. Several tools have been developed during this work, of which two are being commercialized in the spin-off company Percepio AB. Note that the Katana approach, in area (2), is the subject of a recent patent application (patent pending). / PROGRESS
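The core idea, prototyping a change by inflating a task's execution time in a simulation model and observing the effect on response times, can be illustrated with a minimal sketch. The task set, scheduler, and all parameters below are invented for illustration and are not taken from the thesis:

```python
# Minimal sketch of simulation-based timing impact analysis. A proposed
# change is "prototyped" by increasing one task's execution time and
# comparing worst-case response times before and after.

def simulate(tasks, horizon):
    """Discrete-time simulation of preemptive fixed-priority scheduling.
    tasks: list of (period, exec_time); the list index is the priority
    (0 = highest). Returns the worst observed response time per task."""
    pending = []                     # jobs: [priority, remaining, release]
    worst = [0] * len(tasks)
    for t in range(horizon):
        for prio, (period, exec_time) in enumerate(tasks):
            if t % period == 0:      # periodic release
                pending.append([prio, exec_time, t])
        if pending:
            pending.sort()           # highest-priority job first
            job = pending[0]
            job[1] -= 1              # run it for one time unit
            if job[1] == 0:          # job finished: record response time
                prio, _, release = job
                worst[prio] = max(worst[prio], t + 1 - release)
                pending.pop(0)
    return worst

baseline = simulate([(5, 1), (10, 3), (20, 4)], horizon=200)
modified = simulate([(5, 1), (10, 4), (20, 4)], horizon=200)  # task 1: +1 unit
print(baseline, modified)  # [1, 4, 9] vs [1, 5, 10]
```

Here the prototyped change (one extra time unit for the middle task) also lengthens the lowest-priority task's response time, exactly the kind of ripple effect the impact analysis is meant to expose before implementation.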
52

The Value Of Radio Frequency Identification Technology For Managing Pools Of Returnable Transport Items

Demir, Aysegul 01 September 2010 (has links) (PDF)
Limited asset visibility is a key problem in the management of returnable transport items (RTIs) such as reusable containers, pallets and kegs. One tool to increase asset visibility is radio frequency identification (RFID) technology. However, RFID requires a high investment cost and an intensive implementation effort. In this study, we investigate the added value of using RFID technology for the management of an RTI pool in a closed-loop supply chain setting, considering both costs and benefits. We conducted a case study in a company that has recently started an RFID application in its closed-loop supply chain of RTIs. The aim of this case study is to identify and understand how an existing RTI pool is managed and what impact RFID technology has on the management of such a pool. In order to quantify the added value of RFID technology in RTI pool management, we search for the minimum-cost solutions both without and with the use of RFID technology, in a problem environment similar to that of our case study, using the simulation optimization method. We also analyze the impact of using RFID technology on RTI pool management in terms of several performance measures, including RTI pool size, RTI lifetime, RTI trippage and the cycle time for RTIs to complete one trip in the closed-loop supply chain. We develop a number of discrete event simulation models of the identified closed-loop supply chain of RTIs, operating with our predetermined decision rules for RTI pool management, using the simulation software Arena. We then develop our simulation optimization model in OptQuest for Arena, in which the discrete event simulation models are embedded. The results from the simulation optimization method show that the added value of using RFID technology is mostly positive, and that it depends on the severity of the problematic issues in the closed-loop supply chain as well as on the extent of the improvements that RFID brings about.
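A back-of-envelope fluid approximation conveys the cost trade-off that the study quantifies in far more detail via simulation optimization. All numbers below (demand, trip time, loss rate, costs, RFID investment) are hypothetical, not figures from the study:

```python
def rti_cost(demand, trip_days, loss_rate, unit_cost, days=365):
    """Annual RTI cost under a deterministic fluid approximation:
    a pool of demand * trip_days items circulates continuously, and
    lost items (a fraction of each day's shipments) are repurchased."""
    pool = demand * trip_days                  # items tied up in the loop
    replacements = demand * loss_rate * days   # items lost and repurchased
    return (pool + replacements) * unit_cost

# RFID is assumed to shorten the trip (better visibility -> faster returns)
# and to reduce losses; the investment figure is invented for illustration.
without_rfid = rti_cost(demand=100, trip_days=12, loss_rate=0.05, unit_cost=20)
with_rfid = rti_cost(demand=100, trip_days=9, loss_rate=0.01, unit_cost=20)
added_value = without_rfid - with_rfid - 15000   # net of RFID investment
print(without_rfid, with_rfid, added_value)      # 60500.0 25300.0 20200.0
```

As in the study's findings, the sign of the added value depends on how severe the visibility problems (trip time, losses) are relative to the RFID investment.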
53

Contributions à la chaine logistique numérique : conception de circuits courts et planification décentralisée. / Contributions to digital supply chain : design of short and local supply chains and decentralized planning

Ogier, Maxime 05 December 2013 (has links)
Le concept de chaîne logistique numérique regroupe l'ensemble des modèles, méthodes et outils qui permettent de planifier les décisions sur des prototypes numériques de chaîne logistique. Dans ce travail de thèse, nous proposons deux contributions à la chaîne logistique numérique. Nos résultats se destinent en particulier aux réseaux de Petites et Moyennes Entreprises/Industries. D'une part, nous étudions deux nouveaux problèmes liés à la conception de réseaux logistiques en circuits courts et de proximité pour les produits agricoles frais. Pour chacun d'eux nous proposons une formulation en Programme Linéaire à Variables Mixtes. De plus des méthodes de résolution fondées sur des décompositions du modèle nous permettent de résoudre des instances de grande taille. Pour chaque problème, cette approche est mise en œuvre sur une étude de cas menée avec plusieurs collectivités territoriales. D'autre part, nous étudions le problème de planification tactique des activités de production, de transport et de stockage. Contrairement aux approches classiques centralisées, nous considérons que les décisions des différents acteurs sont prises de manière décentralisée. Nous étudions la manière de décomposer les décisions entre les acteurs ainsi que leurs comportements individuels. Nous analysons aussi des protocoles de concertation basés sur un échange limité d'informations. Afin de répondre à la double complexité du problème, nous proposons un outil innovant qui couple une simulation à base de multi-agents à des approches d'optimisation par programmation mathématique. / The concept of the digital supply chain gathers the models, methods and tools used to plan decisions on digital prototypes of supply chains. This doctoral dissertation proposes two contributions to the digital supply chain. Our results are aimed in particular at networks of small and medium enterprises/industries. Firstly, we study two new problems related to service network design for short and local fresh food supply chains.
For each of them we propose a Mixed Integer Linear Programming formulation. Decomposition-based methods are implemented in order to solve large-scale instances. For each problem, this approach is applied in a case study conducted with several local institutions. Secondly, we address the tactical supply chain planning problem: how to plan production, transportation and storage activities. As opposed to the classic centralized version, the decision-making process is considered decentralized. We study how to decompose the decisions between actors, as well as their individual behaviour. We also analyze negotiation processes based on limited information sharing. To address the twofold complexity of the problem, we propose an innovative tool coupling multi-agent-based simulation with optimization approaches based on mathematical programming.
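At toy scale, the kind of binary open/close design decision described above can be solved by brute force; the decomposition methods mentioned are what make realistic instances tractable. The sketch below is a generic facility-location flavour of such a network design model, with all costs invented for illustration (the thesis's actual formulations are richer):

```python
from itertools import product

def design_network(fixed_cost, serve_cost):
    """Choose which hubs to open (binary decisions, as in a MILP) so that
    opening costs plus cheapest-assignment costs are minimized.
    fixed_cost[j]: cost of opening hub j;
    serve_cost[i][j]: cost of serving producer i from hub j."""
    best = (float("inf"), None)
    for opened in product([0, 1], repeat=len(fixed_cost)):
        if not any(opened):
            continue                       # at least one hub must open
        total = sum(f for f, o in zip(fixed_cost, opened) if o)
        total += sum(min(c for c, o in zip(row, opened) if o)
                     for row in serve_cost)
        best = min(best, (total, opened))
    return best

cost, opened = design_network([4, 5], [[4, 9], [8, 3], [7, 5]])
print(cost, opened)  # 21 (1, 1): opening both hubs is cheapest here
```

The search space doubles with every candidate hub, which is precisely why large-scale instances require the decomposition approaches the abstract refers to.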
54

Métodos de simulação-otimização e análise de decisão multi-critério aplicados ao dimensionamento de sistemas logísticos complexos. / Simulation-optimization and multi-criteria decision analysis applied to complex logistics systems.

Edson Felipe Capovilla Trevisan 16 September 2013 (has links)
O estudo de sistemas logísticos envolve a concatenação de elementos estratégicos e operacionais, comumente compondo sistemas com múltiplas facetas, objetivos antagônicos e grande número de alternativas. Nesse contexto, o presente trabalho discute a utilização de análise de decisão multicritério (MCDA), simulação de eventos discretos (SED) e otimização para simulação. A metodologia MCDA captura, mensura e pondera os objetivos e valores dos tomadores de decisão. Por sua vez, a SED representa o sistema estudado com alto nível de detalhamento, permitindo a avaliação de diversas configurações do sistema. Por fim, métodos de otimização para simulação possibilitam a busca e comparação de alternativas mais eficientes. As três metodologias são avaliadas, identificando suas vantagens, desvantagens e complementaridades quando aplicadas a sistemas logísticos. Através da aplicação de um estudo de caso sobre o dimensionamento de um sistema de transporte, constatou-se que: a) a SED incorporou detalhes importantes para a avaliação mais precisa de vários indicadores de desempenho b) a metodologia MCDA possibilitou a captura de vários objetivos e valores, propiciando a realização de tradeoffs robustos; c) um método de busca exaustiva e técnicas de redução de variância permitiram a comparação das alternativas em tempos computacionais reduzidos. Por fim, conclui-se que a metodologia híbrida apresentada expande o potencial de aplicação da SED em sistemas logísticos complexos. / A logistic system study involves strategic and operational elements, commonly composing multi-faceted systems with antagonistic goals and large number of alternatives. In this context, this thesis discusses the use of multi-criteria decision analysis (MCDA), discrete event simulation (DES) and optimization for simulation. The MCDA methodology captures, measures and weighs the goals and values of decision makers. 
DES is useful for representing systems with a high level of detail, allowing the evaluation of several system configurations. Finally, optimization-for-simulation procedures are useful for searching for and comparing more efficient alternatives. These three methodologies are assessed, and their advantages, disadvantages and complementarities are identified for logistics-system applications. Through a case study of a transportation system, we conclude that: a) the DES incorporated important details for a more precise evaluation of various performance indicators; b) the MCDA methodology was useful for capturing several goals and values, so that robust tradeoffs could be carried out; c) an exhaustive search routine and variance reduction techniques allowed the comparison of several alternatives in feasible computational times. Finally, we conclude that the presented hybrid methodology expands the application of DES to complex logistics systems.
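One variance-reduction technique commonly paired with exhaustive search is common random numbers (CRN): competing alternatives are simulated with the same random streams, so the difference between them is estimated far more precisely. A minimal sketch with an invented toy model (not the thesis's transportation system):

```python
import random
from statistics import pvariance

def avg_cycle_time(fleet_size, rng, items=50):
    """Toy simulation: average cycle time of `items` deliveries; the
    fleet-size effect is deterministic, the per-item delay is random."""
    return sum(5.0 / fleet_size + rng.expovariate(1.0)
               for _ in range(items)) / items

reps = 200
# CRN: both alternatives see the exact same random delays per replication.
crn = [avg_cycle_time(3, random.Random(r)) - avg_cycle_time(4, random.Random(r))
       for r in range(reps)]
# Independent streams: each alternative gets its own random delays.
ind = [avg_cycle_time(3, random.Random(r))
       - avg_cycle_time(4, random.Random(10_000 + r))
       for r in range(reps)]
print(pvariance(crn), pvariance(ind))  # CRN variance is (near) zero here
```

In this deliberately simple model the fleet-size effect is additive, so CRN removes essentially all the noise from the comparison; in a real DES the reduction is partial but often dramatic, which is what makes exhaustive comparison of alternatives computationally feasible.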
55

The Fleet-Sizing-and-Allocation Problem: Models and Solution Approaches

El-Ashry, Moustafa 23 November 2007 (has links)
Transportation is one of the most vital services in modern society; it makes most of society's other functions possible. Real transportation systems are so large and complex that, in order to build a science of transportation systems, it is necessary to work in many areas, such as modeling, optimization and simulation. We are interested in solutions for the so-called fleet-sizing-and-allocation problem (FSAP). Fleet-sizing-and-allocation problems are among the most interesting and hardest-to-solve logistic problems. A fleet-sizing-and-allocation problem consists of two interdependent parts. The fleet-sizing problem is to determine a number of transportation units (TUs) that optimally balances service requirements against the cost of purchasing and maintaining the transportation units. The allocation problem deals with the repositioning of transportation units to serve future transportation demand. To make the fleet-sizing-and-allocation problem somewhat more tractable, we concentrate on logistic systems with a special hub-and-spoke structure. We start with the very simple fleet sizing of the one-to-one case, which focuses attention on several key issues in fleet sizing. The generalization of the one-to-one system is the one-to-many system; a simple example is the continuous-time situation in which a single origin delivers items to many destinations, items are produced in a deterministic production cycle, and transportation times are stochastic. We also studied a hub-and-spoke problem with continuous time and stochastic demand; to solve it we applied queueing-theory methods based on marginal analysis. The investigation of the fleet-sizing-and-allocation problem for hub-and-spoke systems starts with a single-period, deterministic-demand model, in which the hub has to decide how to use a given number of TUs to satisfy a known (deterministic) demand in the spokes. We consider two cases: 1. renting additional TUs from outside the system is not possible; 2. renting additional TUs from outside the system is possible. For each case we developed a simple algorithm, based on marginal analysis, which gives the cost-minimal allocation. Since the multi-period, deterministic-demand problem is NP-hard, we suggest using genetic algorithms, and describe some building elements for these. For the most general situation we also suggest using simulation optimization. To realize the simulation optimization approach we use the software tool "Calculation Assessment Optimization System" (CAOS). The idea of CAOS is to provide a software system that separates the optimization process from the optimization problem. To solve an optimization problem, the user of CAOS builds a model of the system to which the problem is related, and defines the decision parameters and their domains. Finally, we used CAOS for two classes of hub-and-spoke systems: 1. a single hub with four spokes; 2. a single hub with fifty spokes. We applied four optimizers (a genetic algorithm, tabu search, a hybrid parallel and a hybrid serial optimizer) with two distributions (normal and exponential) for the customers' interarrival times and their demand.
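Marginal analysis for the single-period, deterministic-demand case amounts to a greedy rule: give each successive TU to the spoke where it reduces cost the most. A minimal sketch of the no-renting case; the demands and shortage costs are invented, and the thesis's actual algorithm may differ in detail:

```python
def allocate(n_units, demand, shortage_cost):
    """Greedy marginal analysis: repeatedly assign one TU to the spoke
    with the largest marginal reduction in shortage cost."""
    spokes = len(demand)
    alloc = [0] * spokes
    for _ in range(n_units):
        # marginal benefit of one more TU at each spoke
        gain = [shortage_cost[i] if alloc[i] < demand[i] else 0.0
                for i in range(spokes)]
        best = max(range(spokes), key=lambda i: gain[i])
        if gain[best] == 0.0:
            break                 # all demand covered; extra TUs are useless
        alloc[best] += 1
    return alloc

print(allocate(6, demand=[3, 2, 4], shortage_cost=[5.0, 9.0, 2.0]))
# -> [3, 2, 1]: the high-penalty spokes are covered first
```

With constant per-unit shortage costs the greedy rule is trivially optimal; marginal analysis earns its keep when the marginal benefit of each additional TU decreases, e.g. under stochastic demand, where the same one-unit-at-a-time logic still yields the cost-minimal allocation.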
56

Simulation-based optimization of geometry and motion of a vertical tubular bag machine

Frank, Matthias, Holzweißig, Johann January 2016 (has links)
Food-industry processes require packaging machines with high throughput, and one way to improve overall machine efficiency is to increase the working speed. However, testing physical prototypes is time-consuming and costly. Therefore, simulation is used to evaluate and adapt the process, and optimization can help to find better machine designs by using a simulation run to evaluate each candidate solution. This work uses the Discrete Element Method to model a vertical tubular bag machine for packaging basmati rice. The Covariance Matrix Adaptation Evolution Strategy optimizes the simulation model and results in a significant machine speed-up. This work serves as guidance for adapting the method to similar problems.
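The optimization loop can be sketched with a much-simplified (1+1) evolution strategy standing in for CMA-ES, and a cheap analytic function standing in for the expensive DEM simulation. The quadratic objective, its optimum, and all step-size constants below are invented for illustration:

```python
import random

def simulated_fill_time(x):
    """Stand-in for one DEM simulation run: returns a (toy) bag-fill time,
    minimized at the hypothetical optimum (3.0, -1.0)."""
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def one_plus_one_es(f, x0, sigma=1.0, iters=300, seed=0):
    """(1+1)-ES with a 1/5th-success rule: mutate, keep the child only if
    it improves, and adapt the step size to the observed success rate."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(child)
        if fc < fx:              # elitist: never accept a worse design
            x, fx = child, fc
            sigma *= 1.22        # success -> try bigger steps
        else:
            sigma *= 0.95        # failure -> shrink steps
    return x, fx

best, best_f = one_plus_one_es(simulated_fill_time, [0.0, 0.0])
print(best, best_f)  # approaches (3.0, -1.0), near-zero objective
```

The constants 1.22 and 0.95 balance at roughly a 1/5 success rate; CMA-ES generalizes this idea by also adapting the full covariance of the mutation distribution, which pays off on the ill-conditioned objectives that geometry/motion co-optimization tends to produce.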
57

Optimization and performance of grinding circuits: the case of Buzwagi Gold Mine (BGM)

Wikedzi, Alphonce Wendelin 03 April 2018 (has links)
Buzwagi Gold Mine (BGM) is operated by Acacia Mining and located in the Lake Victoria Goldfields of central Tanzania. The mine commenced operation in April 2009 and treats a sulphide copper-gold ore to produce gold in the form of doré bars and a concentrate containing gold, copper and silver. The BGM comminution circuit includes a primary crushing stage with a gyratory crusher and two grinding circuits using a semi-autogenous grinding (SAG) mill and a ball mill. The SAG mill circuit also includes a single-deck screen and a cone crusher, while the ball mill circuit utilizes hydrocyclones. Currently, the grinding circuits are inefficient in achieving the target product fineness of xP,80 = 125 μm, even at low to normal throughputs (450-600 t/h). An evaluation and optimization study of the circuit performance was conducted to improve the product fineness through circuit surveys, experimental lab work and simulations. In three full-scale sampling campaigns, size distributions and solids contents of the samples were determined at selected points in the circuit. Further, several types of breakage tests were conducted: standard Bond tests to determine ore grindability and work indices, batch grinding tests to determine parameters for breakage and selection functions, and standard ball mill tests for mineral liberation characterization by an automated mineral liberation analyzer (MLA). The tests were conducted in a size range from 0.063 to 2 mm. Then, a mass balance of the circuit was calculated and the models for mills, screens and hydrocyclones were employed in MODSIM (version 3.6.24). Firstly, simulations were conducted to optimize the existing plant. Several options were evaluated, such as reduction of the SAG screen aperture, adjustment of the cyclone feed solids content, and reduction of the vortex finder and apex diameters.
Moreover, simulations were also evaluated for a possible modification of the existing circuit, including partial splitting of the cyclone underflow back to the SAG mill, introduction of a second classification stage, and introduction of a second ball mill. The evaluation of the breakage tests and survey data revealed the following: the Bond work index obtained for the current ore ranges between 17.20 and 18.70 kWh/t, compared to the 14.50-16.50 kWh/t estimated during plant design. This indicates a change in the hardness of the ore over the last 7 years. Harder ore means a higher energy requirement for efficient operation, the consequence of which is increased costs. Thus, a periodic review of ore hardness over the ongoing mining operation is recommended. This will help in establishing better blends as well as in predicting appropriate tonnages for the existing ore types, so that they can be treated efficiently by the available plant design. The work indices of the ore blends treated during the survey were correlated with their quartz content and showed a strong linear relationship (R² = 0.95). Therefore, the work index for the BGM ore can be predicted from the known quartz content of the material, and the model can serve as a control tool for monitoring hardness variation of the SAG mill feed. The mineral liberation studies indicated that the valuable phase (pyrite-pyrrhotite) can be liberated at relatively coarse particle sizes (200-400 µm). This implies that there should be no problem with the efficiency of the gravity circuit of the BGM operation, where the gold contained in pyrite-pyrrhotite can be easily concentrated. However, the flotation and cyanidation processes will still require a finer feed. Overall, the liberation characteristics of the ore blends treated during the survey showed only minor differences. Bond efficiency factors of 48-61 % were obtained for the BGM grinding circuit, indicating an inefficient operation.
This suggests that the operation could achieve its targets by lowering the throughput. Further, the SAG mill circuit was characterized by a fluctuating feed size of between xF,80 = 102 and 185 mm. Control of the feed size as well as of the blending ratios was recommended for an efficient operation in terms of throughput and final product size. This could be achieved through closer monitoring of the primary crusher performance and proper control of the ratios for the SAG mill feeders drawing ore from the stockpile. The ball mill grinding efficiency was poor, indicated by a fraction < 125 µm of only 5-9 %, or xP,80 > 400 µm, in the mill discharge. This was deemed due to poor hydrocyclone performance, characterized by a high feed solids content and a coarse overflow (xP,80 > 200 µm) as well as coarse cut sizes (xT > 200 µm). An improvement of product fineness to 327 µm could be achieved in the simulation and optimization of the existing design, by modifying the operating conditions: reduction of the SAG screen aperture from 12 mm to 10 mm, reduction of the vortex finder diameter from 280 mm to 270.3 mm, reduction of the apex diameter from 150 mm to 145.6 mm, and adjustment of the cyclone feed solids content from 66.7 to 67.1 %. Based on this result, it was concluded that the current equipment cannot achieve the target product quality (xP,80 = 125 µm). Further simulations based on flowsheet modification options showed that a second ball mill (in series configuration) can help to achieve the desired product fineness as well as an increase of throughput from 618 t/h to 780 t/h. Although the circulating load increases to approximately 500 % in this configuration, the benefits outweigh this drawback. This option is, however, cost-intensive and may therefore be considered a long-term solution, especially after a cost-benefit analysis.
Finally, the results based on optimization of the existing design are recommended as a short-term solution for improvement of the BGM operation. Although the fineness achieved is still low (xP,80 = 327 µm) compared to the target (xP,80 = 125 µm), this offers the additional advantage that better hydrocyclone performance is achieved in terms of overflow product (xP,80 = 105 µm vs. > 240 µm), cut size (xT = 133.1 µm vs. > 220 µm) and circulating load (CL = 350 %). The improved overflow fineness will contribute to improved efficiency in the downstream processes.
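The energy implication of the measured work indices can be illustrated with Bond's well-known formula, W = 10 Wi (1/√xP,80 − 1/√xF,80), with sizes in µm and W in kWh/t. The work indices and target product size below come from the abstract; the feed size is an assumed illustrative value, not one reported in the study:

```python
from math import sqrt

def bond_energy(wi, f80_um, p80_um):
    """Specific grinding energy (kWh/t) by Bond's formula.
    wi: Bond work index (kWh/t); f80_um, p80_um: 80%-passing sizes in um."""
    return 10.0 * wi * (1.0 / sqrt(p80_um) - 1.0 / sqrt(f80_um))

# Target fineness xP,80 = 125 um from the study; feed size is assumed.
design = bond_energy(wi=15.5, f80_um=2000, p80_um=125)   # mid design-range Wi
current = bond_energy(wi=18.7, f80_um=2000, p80_um=125)  # upper measured Wi
print(design, current, current / design - 1)  # roughly 21% more energy/tonne
```

Because the energy scales linearly with Wi, the hardening of the ore from the design range (14.5-16.5 kWh/t) to the measured range (17.2-18.7 kWh/t) translates directly into a proportionally higher energy demand per tonne, which is why the abstract recommends periodic re-measurement of ore hardness.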
58

Optimizing hydraulic reservoirs using Euler-Euler-Lagrange multiphase CFD simulation

Muttenthaler, Lukas, Manhartsgruber, Bernhard 25 June 2020 (has links)
Well-functioning hydraulic systems need clean hydraulic oil; the system must therefore ensure the separation of molecular, gaseous, liquid and solid contaminants. The key element in the separation of contaminants is the hydraulic reservoir. Solid particles are a major source of maintenance costs and machine downtime. Thus, an Euler-Euler-Lagrange multiphase CFD model was developed to predict the transport of solid particles in hydraulic reservoirs. The CFD model identifies and predicts the particle accumulation areas and is used to train port-to-port transfer functions, which can be used in system models to simulate the long-term contamination levels of hydraulic systems. Experimental detection of dynamic particle contamination levels and particle accumulation areas validates both the CFD model and the system model. In combination, the two models allow for parameter and design studies to improve the fluid management of hydraulic reservoirs.
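The role of the CFD-trained system model can be conveyed with a drastically simplified contamination balance: per cycle, particles enter the reservoir and a fixed fraction is separated, a one-state stand-in for the port-to-port transfer functions. The ingress rate and separation efficiency below are invented numbers:

```python
def contamination_level(ingress, separation_eff, cycles):
    """Iterate a one-state contamination balance: particles enter, then a
    fraction `separation_eff` settles out (or is filtered) each cycle."""
    n = 0.0
    history = []
    for _ in range(cycles):
        n = (n + ingress) * (1.0 - separation_eff)
        history.append(n)
    return history

levels = contamination_level(ingress=1000.0, separation_eff=0.2, cycles=100)
# The long-term level settles at ingress * (1 - e) / e = 4000 particles here.
print(levels[0], levels[-1])
```

In the actual approach, the scalar separation fraction is replaced by CFD-trained port-to-port transfer functions, so the same cheap iteration can predict long-term contamination levels without rerunning the expensive multiphase simulation.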
59

Convergent and Efficient Methods to Optimize Deep Learning

Mashayekhi, Mehdi 29 September 2022 (has links)
No description available.
60

模擬最適化運用於資產配置之驗證 / The Effectiveness of the Asset Allocation Using the Technique of Simulation Optimization

劉婉玉 Unknown Date (has links)
本文利用模擬最適化(Simulation Optimization)的技術，來找出適合投資人之最佳資產配置。模擬最適化係為一種將決策變數輸入而使其反應變數得到最佳化結果之技術，在本篇中，決策變數為各種投資標的之資產配置，而反應變數則為投資結果之預期報酬與標準差，模擬最適化可視為一種在可行範圍內尋求最佳解之過程。本篇中模擬最適化之方法係採演化策略法，最適化問題則為具放空限制之多期架構。我們亦進一步與各種傳統的投資保險策略比較，包括買入持有策略(Buy-and-Hold)、固定比例策略(Constant Mix)、固定比例投資保險策略(Constant Proportion Portfolio Insurance)及時間不變性投資組合保險策略(Time-Invariant Portfolio Protection)，以驗證模擬最適化的有效性，並以多種評估指標來衡量各種策略績效之優劣。 由實證結果發現，利用模擬最適化求解出每月的最適資產配置，雖然造成每期因資金配置比例變動而提高波動性，另一方面卻能大幅的增加報酬率。整體而言，模擬最適化技術的確能夠有效提升投資績效，使得最終財富增加，並且得到較大的夏普指數及每單位風險下較高的報酬。 / This paper applied the simulation optimization technique to search for the optimal asset allocation. Simulation optimization is the process of determining the values of the decision variables that optimize the values of the stochastic response variable generated from a simulation model. The decision variables in our case are the allocations to various kinds of assets. The response variable is a function of the expected wealth and the associated risk. The simulation optimization problem can be characterized as a stochastic search over a feasible exploration region. The method we applied is the evolution strategies, and the optimization problem is formulated as a multi-period one with short-sale constraints. In order to verify the effectiveness of simulation optimization, we compared the resulting asset allocation with allocations obtained using traditional portfolio insurance strategies, including Buy-and-Hold, Constant Mix, Constant Proportion Portfolio Insurance, and Time-Invariant Portfolio Protection. We also used several indexes to evaluate the performance of all the strategies in this paper. Our empirical results indicated that using simulation optimization to search for the best asset allocation resulted in larger volatility, owing to the period-by-period reallocation; however, it significantly enhanced the rate of return.
Overall, applying simulation optimization indeed achieves better performance: it increases the final wealth, yields a larger Sharpe index, and obtains a higher return per unit of risk.
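The search described above can be sketched as a tiny elitist evolution strategy over portfolio weights with a short-sale (non-negativity) constraint. The expected returns, risks, and covariance below are invented, and the real study optimizes a multi-period simulated wealth path rather than this one-period Sharpe-style ratio:

```python
import random

MU = [0.08, 0.12, 0.04]          # hypothetical expected returns
SIGMA = [[0.04, 0.01, 0.00],     # hypothetical covariance matrix
         [0.01, 0.09, 0.00],
         [0.00, 0.00, 0.01]]

def sharpe(w):
    """Sharpe-style objective: expected return over standard deviation."""
    mean = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * SIGMA[i][j] * w[j]
              for i in range(3) for j in range(3))
    return mean / max(var, 1e-12) ** 0.5

def project(w):
    """Enforce the short-sale constraint: clip at 0, renormalize to sum 1."""
    w = [max(wi, 0.0) for wi in w]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def evolve(iters=500, seed=0):
    rng = random.Random(seed)
    w = project([1 / 3] * 3)
    fw = sharpe(w)
    for _ in range(iters):
        child = project([wi + rng.gauss(0.0, 0.05) for wi in w])
        fc = sharpe(child)
        if fc > fw:              # elitist: keep only improvements
            w, fw = child, fc
    return w, fw

best_w, best_sharpe = evolve()
print(best_w, best_sharpe)
```

Because every candidate is projected back onto the non-negative simplex before evaluation, the short-sale constraint is respected throughout the stochastic search, mirroring the constrained multi-period formulation used in the paper.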
