31 |
Une approche par composants pour l'analyse visuelle interactive de résultats issus de simulations numériques / A component-based approach for interactive visual analysis of numerical simulation results. Ait Wakrime, Abderrahim, 10 December 2015 (has links)
Les architectures par composants sont de plus en plus étudiées et utilisées pour le développement efficace des applications en génie logiciel. Elles offrent, d’un côté, une architecture claire aux développeurs, et de l’autre, une séparation des différentes parties fonctionnelles et en particulier dans les applications de visualisation scientifique interactives. La modélisation de ces applications doit permettre la description des comportements de chaque composant et les actions globales du système. De plus, les interactions entre composants s’expriment par des schémas de communication qui peuvent être très complexes avec, par exemple, la possibilité de perdre des messages pour gagner en performance. Cette thèse décrit le modèle ComSA (Component-based approach for Scientific Applications) qui est basé sur une approche par composants dédiée aux applications de visualisation scientifique interactive et dynamique formalisée par les réseaux FIFO colorés stricts (sCFN). Les principales contributions de cette thèse sont dans un premier temps, un ensemble d’outils pour modéliser les différents comportements des composants ainsi que les différentes politiques de communication au sein de l’application. Dans un second temps, la définition de propriétés garantissant un démarrage propre de l’application en analysant et détectant les blocages. Cela permet de garantir la vivacité tout au long de l’exécution de l’application. Finalement l’étude de la reconfiguration dynamique des applications d’analyse visuelle par ajout ou suppression à la volée d’un composant sans arrêter toute l’application. Cette reconfiguration permet de minimiser le nombre de services non disponibles. / Component-based approaches are increasingly studied and used for the effective development of applications in software engineering.
They offer, on the one hand, a clear architecture to developers and, on the other, a separation of the various functional parts, particularly in interactive scientific visualization applications. Modeling such applications must make it possible to describe the behavior of each component and the global actions of the system. Moreover, the interactions between components are expressed through communication schemes that can be very complex, with, for example, the possibility of losing messages to enhance performance. This thesis describes the ComSA model (Component-based approach for Scientific Applications), which relies on a component-based approach dedicated to interactive and dynamic scientific visualization applications, together with its formalization in strict Colored FIFO Nets (sCFN). The main contributions of this thesis are, first, the definition of a set of tools to model the components' behaviors and the application's various communication policies. Second, the definition of properties that guarantee the application starts properly, by analyzing and detecting deadlocks; this ensures liveness throughout the application's execution. Finally, the study of the dynamic reconfiguration of visual analytics applications by adding or removing a component on the fly without stopping the whole application. This reconfiguration minimizes the number of unavailable services.
|
32 |
Identifying factors that cause inventory build-ups and how to solve it. Eriksson, Anders; Music, Anes, January 2019 (has links)
Companies have put much focus on production systems to generate and maintain competitiveness, which has contributed to less focus on logistics. The material flow comes after the main processes and has therefore been regarded as “unimportant”. If the material flow fails, there can be consequences such as inventory build-ups or undersupply of material. Lean thinking is one strategy that may be applied to analyze and identify wastes, but problems have become harder to detect, while the ability to solve them has not improved at the same rate. The following two research questions were asked to identify how companies should proceed to improve their inventory management and what factors contribute to inventory build-ups: What may cause excess inventory in manufacturing companies? How can a manufacturing company reduce WIP? The research method is based on a qualitative approach with an interpretivist research methodology. A case study was conducted at a manufacturing company to help answer the research questions. The data was collected through observations of section B1 and unstructured interviews with both management and operators. The collected data was then compared to the literature, following inductive reasoning, in order to make suggestions for improvements. The DMAIC tool has been central to this research for mapping the current state and suggesting a future state. The case study was conducted at Company X AB in central Sweden, a company that manufactures components and complete solutions. The focus on production has resulted in less focus on internal logistics, and with that low focus, inventory build-ups have occurred. The results point to OEE being a contributing factor to the inventory build-up: the availability of both machines was low, which caused the OEE to be low.
The low availability was caused by long changeovers, staff shortages, and emergency repairs. The conclusions are that Company X must make improvements so that the factors behind the low availability decrease in frequency and severity, in order to reduce work-in-process (WIP). The improvements should be approached with Lean tools such as SMED, kanban, FIFO and 5S.
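The OEE figure the study points to is conventionally the product of availability, performance and quality; the following is a minimal sketch of that calculation with hypothetical shift data, not figures from the case study:

```python
# Illustrative OEE (Overall Equipment Effectiveness) calculation.
# All numbers below are hypothetical, not taken from the case study.

def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability, performance, quality, availability * performance * quality

# Example: an 8-hour shift (480 min), 120 min lost to changeovers and repairs,
# an ideal cycle time of 1 min/unit, 300 units produced, 285 of them good.
a, p, q, o = oee(planned_time=480, downtime=120, ideal_cycle_time=1.0,
                 total_count=300, good_count=285)
print(f"availability={a:.2f} performance={p:.2f} quality={q:.2f} OEE={o:.2f}")
# → availability=0.75 performance=0.83 quality=0.95 OEE=0.59
```

As in the case study, a low availability term drags the whole product down even when performance and quality are acceptable.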
|
33 |
Studie průběhu zakázky ve výrobním podniku / The Study During the Engagement in the Manufacturing Company. Kubíček, Filip, January 2014 (has links)
This master's thesis is aimed at the optimization of selected manufacturing processes at DH DEKOR Spol. s r.o., a company engaged in the production of impregnated paper and laminated boards. Based on the analysis of the processes in the storehouse and in the manufacture of impregnated paper, proposals and possible measures are given which lead to the minimization of time and financial losses.
|
34 |
Evidence a řízení zásob ve vybraném podniku / Evidence and control of inventory in selected company. BESEDOVÁ, Nela, January 2010 (has links)
The diploma thesis deals with stock holding and inventory management in a selected company. The main sources of information were specialized literature and information provided by the company. The objective of this thesis was the analysis, evaluation and suggestion of improvements to stock holding and inventory management. The theoretical part focuses on stock holding and the theory of inventory management. The practical part presents the company, its production process and the structure of its inventory. The processes of stock holding and inventory management were analysed. The company uses the MFG/PRO program and a kanban system. The diploma thesis closes with an evaluation and suggestions for improving inventory management.
|
35 |
Sur des modèles pour l’évaluation de performance des caches dans un réseau cœur et de la consommation d’énergie dans un réseau d’accès sans-fil / On models for performance analysis of a core cache network and power save of a wireless access network. Choungmo Fofack, Nicaise Éric, 21 February 2014 (has links)
Internet est un véritable écosystème. Il se développe, évolue et s’adapte aux besoins des utilisateurs en termes de communication, de connectivité et d’ubiquité. Dans la dernière décennie, les modèles de communication ont changé passant des interactions machine-à-machine à un modèle machine-à-contenu. Cependant, différentes technologies sans-fil et de réseaux (tels que les smartphones et les réseaux 3/4G, streaming en ligne des médias, les réseaux sociaux, réseaux-orientés contenus) sont apparues pour améliorer la distribution de l’information. Ce développement a mis en lumière les problèmes liés au passage à l’échelle et à l’efficacité énergétique; d’où la question: Comment concevoir ou optimiser de tels systèmes distribués qui garantissent un accès haut débit aux contenus tout en (i) réduisant la congestion et la consommation d’énergie dans le réseau et (ii) s’adaptant à la demande des utilisateurs dans un contexte connectivité quasi-permanente? Dans cette thèse, nous nous intéressons à deux solutions proposées pour répondre à cette question: le déploiement des réseaux de caches et l’implantation des protocoles économes en énergie. Précisément, nous proposons des modèles analytiques pour la conception de ces réseaux de stockage et la modélisation de la consommation d’énergie dans les réseaux d’accès sans fil. Nos études montrent que la prédiction de la performance des réseaux de caches réels peut être faite avec des erreurs relatives absolues de l’ordre de 1% à 5% et qu’une proportion importante soit 70% à 90% du coût de l’énergie dans les cellules peut être économisée au niveau des stations de base et des mobiles sous des conditions réelles de trafic. / Internet is a real ecosystem. It grows, evolves and adapts to the needs of users in terms of communication, connectivity and ubiquity.
In the last decade, the communication paradigm has shifted from traditional host-to-host interactions to the recent host-to-content model, while various wireless and networking technologies (such as 3/4G smartphones and networks, online media streaming, social networks, clouds, Big Data, information-centric networks) emerged to enhance content distribution. This development shed light on scalability and energy-efficiency issues, which can be formulated as follows: how can we design or optimize such large-scale distributed systems in order to achieve and maintain high-speed access to contents while (i) reducing congestion and energy consumption in the network and (ii) adapting to the temporal locality of users' demand in a continuous connectivity paradigm? In this thesis we focus on two solutions proposed to answer this question: in-network caching and power-save protocols, addressing the scalability and energy-efficiency issues respectively. Precisely, we propose analytic models for designing core cache networks and modeling energy consumption in wireless access networks. Our studies show that the performance of general core cache networks in real application cases can be predicted with absolute relative errors on the order of 1%–5%; meanwhile, dramatic energy savings can be achieved by mobile devices and base stations, e.g., as much as 70%–90% of the energy cost in cells with realistic traffic load and the considered parameter settings.
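As a toy illustration of the kind of cache performance question the abstract describes: the thesis models whole cache networks analytically, while the sketch below merely simulates a single LRU cache under Zipf-like demand, with all parameters hypothetical:

```python
# Hedged sketch: estimate the hit ratio of one LRU cache under Zipf(alpha)
# popularity. This is only an analogue of the abstract's topic; the thesis
# itself uses analytic models of cache *networks*, not this simulation.
import itertools
import random
from collections import OrderedDict

def simulate_lru(cache_size, catalog, alpha, requests, seed=0):
    """Return the fraction of requests served from the cache."""
    rng = random.Random(seed)
    # Cumulative Zipf-like weights, precomputed so each draw is O(log catalog).
    cum = list(itertools.accumulate((k + 1) ** -alpha for k in range(catalog)))
    cache = OrderedDict()
    hits = 0
    for _ in range(requests):
        item = rng.choices(range(catalog), cum_weights=cum)[0]
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # mark as most recently used
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / requests

print(f"estimated hit ratio: {simulate_lru(100, 1000, 0.8, 20000):.2f}")
```

Simulations like this are the usual baseline against which analytic cache models (such as those developed in the thesis) are validated.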
|
36 |
Optimisation of a pillow production line applying Lean principles. Fookes, William, January 2010 (has links)
Manufacturing companies throughout the world are interested in reducing the time between a customer placing an order and receiving the payment for that order. This premise is a central characteristic of the Lean philosophy and one of the reasons to apply it. Today manufacturers around the world are embracing Lean techniques in order to reduce waste, increase productivity, and increase inventory turns, which improves the company's cash flow. With the current financial turmoil, every company is looking to reduce inventories, to work with just-in-time supply chains, and to develop production systems that reduce scrap and produce only what is needed, saving space and freeing up time to work on new designs and stay at the edge of innovation in order to gain market share and keep improving. This master thesis focuses on implementing Lean principles in a pillow production line. To achieve this, a series of assessment techniques were applied to the facility, which made it possible to understand how the facility was working, where the bottleneck was, and how the facility functions as a system, avoiding a focus on any single point and instead viewing it as a whole, where each part contributes in a specific and unique way but all of them are necessary. Applying Lean principles is a daunting, long-running task of continuous trial and error; therefore, the goal of this study is to lay the foundations for a Lean transformation. After analyzing the facility, a schedule for the implementation is developed and proposed to the company. The study reveals that it is possible to reduce the facility's lead time by 60% and avoid the backorder situation present in the company, while also improving the service level.
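One way to relate a 60% lead-time reduction to inventory levels is Little's law (WIP = throughput × lead time); the sketch below uses purely illustrative numbers, not figures from the study:

```python
# Hedged sketch: Little's law links lead time, WIP and throughput.
# All quantities below are hypothetical, chosen only to illustrate the relation.

def lead_time(wip, throughput):
    """Little's law: average lead time = WIP / throughput."""
    return wip / throughput

before = lead_time(wip=1200, throughput=100)  # 12 time units
after = before * (1 - 0.60)                   # a 60% lead-time reduction -> 4.8
wip_after = after * 100                       # WIP implied at unchanged throughput
print(round(before, 1), round(after, 1), round(wip_after, 1))  # 12.0 4.8 480.0
```

At constant throughput, cutting lead time by 60% implies cutting WIP by the same 60%, which is why lead-time and inventory reductions tend to go together in Lean transformations.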
|
37 |
Medžiagų ir produkcijos apskaita ir auditas / Accounting and Audit of Materials and Production. Strazdienė, Daiva, 26 May 2005 (has links)
Research object: stocks. Research subject: accounting and audit. Research aim: to investigate the main problems of stocks accounting and audit and to give suggestions that can help to improve them. Objectives: 1) to analyze the peculiarities of stocks and production accounting and audit; 2) to carry out an empirical study of stocks and production accounting and audit; 3) to define and analyze the main problems of stocks and production accounting and audit; 4) to formulate conclusions and suggestions in order to develop the field of stocks accounting and audit. Research methods: logical analysis, synthesis, comparison, questionnaire survey and description. In the course of the investigation, the theory and practice of stocks accounting and audit were analyzed, the main problems of stocks accounting and audit were investigated, and suggestions were given that can help to solve the problems identified.
|
38 |
Evidence zásob a její problematika ve vybrané obchodní společnosti s potravinářským zaměřením výroby / Stock recording and related problems in a selected company dealing with food production. WERTHEIMOVÁ, Marie, January 2009 (has links)
The diploma thesis titled “Stock recording and related problems in a selected company dealing with food production” analyses the recording, pricing and accounting of stock. The first part of the thesis defines stock and deals with stock pricing and posting, the stocktaking process, and mistakes in stock recording and posting that can occur in practice; it also compares selected areas of stock recording with the International Accounting Standards (IAS/IFRS). The second part of the thesis deals with a particular company and describes its accounting processes. Its stock circulation is demonstrated with a selected sample of purchased goods (material) and a product. The final part of the thesis identifies existing and possible problems related to stock recording in the company and proposes possible solutions.
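Stock pricing of the kind such a thesis analyses includes issue-pricing methods like FIFO; the sketch below is a hypothetical illustration of FIFO issue costing (the abstract does not specify which pricing methods the company uses):

```python
# Hedged sketch: FIFO stock issue pricing. Quantities and prices are invented
# for illustration; this is not the company's actual stock data.
from collections import deque

def fifo_issue(layers, qty):
    """Issue `qty` units from FIFO layers [(units, unit_cost), ...],
    consuming the oldest receipts first; return the cost of goods issued.
    Assumes the layers hold at least `qty` units in total."""
    cost = 0.0
    while qty > 0:
        units, unit_cost = layers[0]
        take = min(units, qty)
        cost += take * unit_cost
        qty -= take
        if take == units:
            layers.popleft()                  # oldest layer fully consumed
        else:
            layers[0] = (units - take, unit_cost)
    return cost

stock = deque([(100, 2.0), (50, 2.5)])  # two receipts at different unit prices
print(fifo_issue(stock, 120))  # 100*2.0 + 20*2.5 = 250.0
print(stock)                   # deque([(30, 2.5)])
```

Under FIFO, the cost of goods issued reflects the oldest purchase prices, while the remaining stock is valued at the most recent ones.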
|
39 |
Stream Computing on FPGAs. Plavec, Franjo, 01 September 2010 (has links)
Field Programmable Gate Arrays (FPGAs) are programmable logic devices used for the implementation of a wide range of digital systems. In recent years, there has been an increasing interest in design methodologies that allow high-level design descriptions to be automatically implemented in FPGAs. This thesis describes the design and implementation of a novel compilation flow that implements circuits in FPGAs from a streaming programming language. The streaming language supported is called FPGA Brook, and is based on the existing Brook and GPU Brook languages, which target streaming multiprocessors and graphics processing units (GPUs), respectively. A streaming language is suitable for targeting FPGAs because it allows system designers to express applications in a way that exposes parallelism, which can then be exploited through parallel hardware implementation. FPGA Brook supports replication, which allows the system designer to trade off area for performance by specifying the parts of an application that should be implemented as multiple hardware units operating in parallel, to achieve desired application throughput. Hardware units are interconnected through FIFO buffers, which effectively utilize the small memory modules available in FPGAs.
The FPGA Brook design flow uses a source-to-source compiler, and combines it with a commercial behavioural synthesis tool to generate hardware. The source-to-source compiler was developed as a part of this thesis and includes novel algorithms for implementation of complex reductions in FPGAs. The design flow is fully automated and presents a user interface similar to traditional software compilers. A suite of benchmark applications was developed in FPGA Brook and implemented using our design flow. Experimental results show that applications implemented using our flow achieve much higher throughput than the Nios II soft processor implemented in the same FPGA device. Comparison to the commercial C2H compiler from Altera shows that while simple applications can be effectively implemented using the C2H compiler, complex applications achieve significantly better throughput when implemented by our system. Performance of many applications implemented using our design flow would scale further if a larger FPGA device were used. The thesis demonstrates that using an automated design flow to implement streaming applications in FPGAs is a promising methodology.
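The dataflow structure described above, kernels connected by bounded FIFO buffers, can be mimicked in software. The following Python sketch is only an analogue of FPGA Brook's hardware pipelines, not part of the actual tool; kernel functions and queue sizes are invented for illustration:

```python
# Hedged sketch: a software analogue of a streaming pipeline. Each kernel runs
# concurrently and communicates only through bounded FIFO buffers, mirroring
# (loosely) how FPGA Brook interconnects hardware units with on-chip FIFOs.
import queue
import threading

def kernel(fn, inq, outq):
    """Apply fn to each item from inq, forwarding a None end-of-stream marker."""
    while True:
        item = inq.get()
        if item is None:
            outq.put(None)
            return
        outq.put(fn(item))

src = queue.Queue(maxsize=4)   # bounded FIFOs, like small on-chip buffers
mid = queue.Queue(maxsize=4)
sink = queue.Queue(maxsize=4)

threads = [
    threading.Thread(target=kernel, args=(lambda x: x * x, src, mid)),
    threading.Thread(target=kernel, args=(lambda x: x + 1, mid, sink)),
]
for t in threads:
    t.start()
for v in [1, 2, 3]:            # feed the stream
    src.put(v)
src.put(None)                  # end of stream

out = []
while (item := sink.get()) is not None:
    out.append(item)
for t in threads:
    t.join()
print(out)  # [2, 5, 10]
```

Because each stage is a single worker reading a FIFO, ordering is preserved end to end; replication in FPGA Brook would correspond to running several copies of a stage in parallel, which this toy version does not attempt.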
|