  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Metody technické prognostiky aplikovatelné v embedded systémech / Methods of Technical Prognostics Applicable to Embedded Systems

Krupa, Miroslav January 2012 (has links)
The main aim of this dissertation is to provide a comprehensive overview of technical prognostics, which finds application in so-called predictive maintenance based on continuous monitoring of equipment and estimation of the system's level of degradation or its remaining useful life, particularly for complex devices and machines. At present, technical diagnostics is relatively well mapped and deployed in practice, unlike technical prognostics, which is still a developing field that lacks a larger number of real applications; moreover, not all methods are sufficiently accurate and applicable to embedded systems. The dissertation presents an overview of the basic methods usable for predicting remaining useful life and describes metrics by which the individual approaches can be compared, both in terms of accuracy and in terms of computational cost. One core contribution is a recommendation and procedure for selecting a suitable prognostic method with respect to prognostic criteria. Another core contribution is the introduction of particle filtering, suitable for model-based prognostics, with verification of its implementation and a comparison. The main contribution is a case study on the highly topical subject of Li-Ion battery prognostics with continuous monitoring in mind. The case study demonstrates the model-based prognostic process and compares possible approaches both for estimating the time before battery discharge and for tracking possible influences on battery degradation. The work also includes a basic verification of the Li-Ion battery model and a design of the prognostic process.
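The particle-filtering approach described above can be illustrated with a minimal bootstrap particle filter. This is an illustrative sketch, not the dissertation's implementation: the exponential-decay degradation model, the noise levels, and the function name `particle_filter_soh` are all assumptions chosen for the example.

```python
import math
import random

random.seed(0)

def particle_filter_soh(measurements, n_particles=500,
                        decay=0.995, process_std=0.002, meas_std=0.05):
    """Bootstrap particle filter tracking a battery's state of health (SoH).

    Assumes a hypothetical exponential-decay degradation model
    soh[k] = decay * soh[k-1] + process noise, observed with Gaussian noise.
    Returns the mean filtered SoH after the last measurement.
    """
    particles = [1.0] * n_particles  # all particles start at full health
    for z in measurements:
        # propagate each particle through the degradation model
        particles = [decay * p + random.gauss(0.0, process_std) for p in particles]
        # weight particles by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # resample: particles with high likelihood survive
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles

# simulate 50 noisy capacity measurements of a slowly degrading cell
true_soh = [0.995 ** k for k in range(1, 51)]
observed = [s + random.gauss(0.0, 0.01) for s in true_soh]
estimate = particle_filter_soh(observed)  # should land near true_soh[-1] ~ 0.78
```

In a real prognostic process the filtered state would then be extrapolated through the degradation model until a failure threshold is crossed, giving a remaining-useful-life distribution rather than a point estimate.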
472

A PROBABILISTIC APPROACH TO UNCERTAINTY IN TURBINE EFFICIENCY MEASUREMENT

Lakshya Bhatnagar (5930546) 20 June 2022 (has links)
Efficiency is an essential metric for assessing turbine performance. Modern turbines rely heavily on numerical computational fluid dynamics (CFD) tools for design improvement. With more compact turbines leading to lower-aspect-ratio airfoils, secondary flows have a significant influence on performance. Secondary flows, and detached flows in general, remain a challenge for commercial CFD solvers; hence, there is a need for high-fidelity experimental data to tune the solvers used by turbine designers. Efficiency measurements in engine-representative test rigs are challenging for multiple reasons; a problem inherent to any experiment is removing the effects specific to the turbine rig. This problem is compounded by the narrow uncertainty band required, ideally less than 0.5% uncertainty, to detect the incremental improvements achieved by turbine designers. Efficiency measurements carried out in engine-representative turbine rigs have traditionally relied upon strong assumptions, such as neglecting heat transfer effects. Furthermore, prior to this research there was no framework for computing uncertainty propagation that combines inputs from both experiments and computational tools.

This dissertation presents a comprehensive methodology for obtaining high-fidelity adiabatic efficiency data in engine-representative turbine facilities, using probabilistic sampling techniques to allow for uncertainty propagation. The effect of rig-specific factors, such as heat transfer and gas properties, on efficiency is demonstrated. Sources of uncertainty are identified, and a framework is presented that divides the sources into bias and stochastic components and allows the combination of experimental and numerical uncertainty. The accuracy of the temperature and aerodynamic pressure probes used for efficiency determination is quantified, and corrections for these effects are presented that rely on hybrid numerical and experimental methods. Uncertainty is propagated through these methods using numerical sampling.

Finally, two test cases are presented: a stator vane in an annular cascade and a two-stage turbine in a rotating rig. Their performance is analyzed using the methods and corrections developed. The uncertainty on the measured efficiency is similar to that reported in the literature, but the framework additionally allows an uncertainty estimate on the adiabatic efficiency.
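The probabilistic-sampling idea can be sketched as a simple Monte Carlo propagation of measurement uncertainty into an adiabatic efficiency estimate. The efficiency formula is the standard ideal-gas total-to-total form; the nominal values and Gaussian uncertainty levels below are illustrative assumptions, not the dissertation's actual rig data or framework.

```python
import math
import random

random.seed(1)

GAMMA = 1.4  # assumed specific heat ratio (ideal gas, illustrative only)

def adiabatic_efficiency(t_in, t_out, p_in, p_out):
    """Turbine adiabatic efficiency: actual over ideal (isentropic) temperature drop."""
    ideal_drop = 1.0 - (p_out / p_in) ** ((GAMMA - 1.0) / GAMMA)
    return (1.0 - t_out / t_in) / ideal_drop

def propagate_uncertainty(n_samples=20000):
    """Monte Carlo propagation: sample the measurement distributions and
    return the mean and standard deviation of the resulting efficiency."""
    etas = []
    for _ in range(n_samples):
        t_in = random.gauss(600.0, 0.5)   # K, illustrative probe uncertainty
        t_out = random.gauss(500.0, 0.5)  # K
        p_in = random.gauss(400.0, 1.0)   # kPa
        p_out = random.gauss(200.0, 1.0)  # kPa
        etas.append(adiabatic_efficiency(t_in, t_out, p_in, p_out))
    mean = sum(etas) / n_samples
    var = sum((e - mean) ** 2 for e in etas) / (n_samples - 1)
    return mean, math.sqrt(var)

eta_mean, eta_std = propagate_uncertainty()
```

Even sub-kelvin probe uncertainty produces an efficiency spread near the 0.5% band mentioned above, which is why probe accuracy and bias corrections matter so much in this kind of measurement.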
473

Evaluation of model-based fault diagnosis combining physical insights and neural networks applied to an exhaust gas treatment system case study

Kleman, Björn, Lindgren, Henrik January 2021 (has links)
Fault diagnosis can be used to detect faults in a technical system early, so that workshop service can be planned before a component is fully degraded. Fault diagnosis helps avoid downtime and accidents, and can be used to reduce emissions in certain applications. Traditionally, however, diagnosis systems have been designed using ad hoc methods and a lot of system knowledge. Model-based diagnosis is a systematic way of designing diagnosis systems that is modular and offers high performance. A model-based diagnosis system can be designed using mathematical models that are otherwise used for simulation and control. A downside of model-based diagnosis is the modeling effort needed when no accurate models are available, which can take a large amount of time. This has motivated the use of data-driven diagnosis. Data-driven methods do not require as much system knowledge or modeling effort, but they require large amounts of data, including data from faults, which can be hard to gather. Hybrid fault diagnosis methods that combine models and training data can take advantage of both approaches, decreasing the time needed for modeling while not requiring data from faults. In this thesis a combined data-driven and model-based fault diagnosis system has been developed and evaluated for the exhaust treatment system of a heavy-duty diesel truck. The diagnosis system combines physical insights and neural networks to detect and isolate faults in the exhaust treatment system, and is compared with another system developed during this thesis using only model-based methods. Experiments were done using data from a heavy-duty Scania truck. The results show the effectiveness of both methods in an industrial setting and how model-based approaches can be used to improve diagnostic performance. The hybrid method is shown to be an efficient way of developing a diagnosis system. Some downsides are also highlighted, such as the dependence of the hybrid system's performance on the quality of the training data. Future work on the modularity and transferability of the hybrid method remains for further evaluation.
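A minimal residual-based detector illustrates the model-based half of such a diagnosis system: a model predicts the output, and the residual between measurement and prediction is thresholded. The first-order plant model, threshold, and function names below are hypothetical stand-ins; a real system would use the exhaust-treatment models and add the neural-network components described above.

```python
def simulate_model(inputs, a=0.9, b=0.1):
    """Hypothetical first-order plant model: x[k+1] = a*x[k] + b*u[k]."""
    x, outputs = 0.0, []
    for u in inputs:
        x = a * x + b * u
        outputs.append(x)
    return outputs

def detect_fault(measured, inputs, threshold=0.05):
    """Compare measurements against the model prediction and return the index
    of the first sample whose residual exceeds the threshold, or -1 if none does."""
    predicted = simulate_model(inputs)
    for k, (y, y_hat) in enumerate(zip(measured, predicted)):
        if abs(y - y_hat) > threshold:  # residual = measurement - model output
            return k
    return -1

inputs = [1.0] * 40
nominal = simulate_model(inputs)
# inject a sensor bias fault from sample 20 onward
faulty = [y + (0.2 if k >= 20 else 0.0) for k, y in enumerate(nominal)]
```

Fault isolation, as opposed to detection, comes from running a bank of such residuals, each sensitive to a different subset of faults.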
474

Prognostics for Condition Based Maintenance of Electrical Control Units Using On-Board Sensors and Machine Learning

Fredriksson, Gabriel January 2022 (has links)
This thesis studies how operational and workshop data can be used to improve the handling of field quality (FQ) issues for electronic units, by analysing how failure rates can be predicted, how failure mechanisms can be detected and how data-based lifetime models can be developed. The work has been done on an electronic control unit (ECU) that has been subject to an FQ issue, for which thermomechanical stress on the solder joints of the BGAs (ball grid arrays) on the PCBAs (printed circuit board assemblies) was determined to be the main cause of failure. The project is divided into two parts. Part one, "PCBA", is a laboratory study investigating the effects of thermomechanical cycling on solder joints for different electrical components of the PCBAs. The second part, "ECU", is the main part of the project, investigating data-driven solutions using operational and workshop history data. The results from part one show that the Weibull distribution, commonly used to predict lifetimes of electrical components, describes the laboratory results well, but also that non-parametric methods such as kernel distributions can give good results. In part two, when Weibull, Gamma and Normal distributions were tested on the real ECU data, none of them described the data well. However, when random forests are used to develop data-based models, most of the ECU lifetimes in a separate test dataset can be correctly predicted within a half-year margin. Further, using random survival forests it was possible to produce a model with an out-of-bag (OOB) prediction error of just 0.06. This shows that machine learning methods could potentially be used for condition-based maintenance of ECUs.
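The Weibull lifetime model mentioned above has a closed-form failure distribution, which makes reliability quantities such as the B10 life (the time by which 10% of units have failed) straightforward to compute. The sketch below is generic, with illustrative shape and scale parameters rather than the thesis's fitted values.

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull failure probability F(t) = 1 - exp(-(t/scale)**shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def b_life(p, shape, scale):
    """Invert the CDF: time by which a fraction p of units has failed.
    p=0.10 gives the B10 life."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# illustrative parameters: wear-out failures (shape > 1), characteristic life 1000 cycles
b10 = b_life(0.10, 2.0, 1000.0)  # cycles until 10% of units have failed
```

A shape parameter above 1 encodes wear-out (increasing hazard rate), which is the regime thermomechanical solder-joint fatigue falls into; at t equal to the scale parameter, 63.2% of units have failed regardless of shape.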
475

Efficient placement design and storage cost saving for big data workflow in cloud datacenters / Conception d'algorithmes de placement efficaces et économie des coûts de stockage pour les workflows du big data dans les centres de calcul de type cloud

Ikken, Sonia 14 December 2017 (has links)
Les workflows sont des systèmes typiques traitant le big data. Ces systèmes sont déployés sur des sites géo-distribués pour exploiter des infrastructures cloud existantes et réaliser des expériences à grande échelle. Les données générées par de telles expériences sont considérables et stockées à plusieurs endroits pour être réutilisées. En effet, les systèmes workflow sont composés de tâches collaboratives, présentant de nouveaux besoins en termes de dépendance et d'échange de données intermédiaires pour leur traitement. Cela entraîne de nouveaux problèmes lors de la sélection de données distribuées et de ressources de stockage, de sorte que l'exécution des tâches ou du job s'effectue à temps et que l'utilisation des ressources soit rentable. Par conséquent, cette thèse aborde le problème de gestion des données hébergées dans des centres de données cloud en considérant les exigences des systèmes workflow qui les génèrent. Pour ce faire, le premier problème abordé dans cette thèse traite le comportement d'accès aux données intermédiaires des tâches qui sont exécutées dans un cluster MapReduce-Hadoop. Cette approche développe et explore le modèle de Markov qui utilise la localisation spatiale des blocs et analyse la séquentialité des fichiers spill à travers un modèle de prédiction. Deuxièmement, cette thèse traite le problème de placement de données intermédiaires dans un stockage cloud fédéré en minimisant le coût de stockage. A travers les mécanismes de fédération, nous proposons un algorithme ILP exact afin d'assister plusieurs centres de données cloud hébergeant les données de dépendances en considérant chaque paire de fichiers. Enfin, un problème plus générique est abordé impliquant deux variantes du problème de placement lié aux dépendances divisibles et entières.
L'objectif principal est de minimiser le coût opérationnel en fonction des besoins de dépendances inter et intra-job. / The typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first addressed problem is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill-file sequentiality through a prediction algorithm. Secondly, this thesis deals with storage cost minimization of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters hosting the generated intermediate data dependencies of pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency and data size. Finally, a more generic problem is addressed that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
476

Zobrazování komplexních 3D scén / Rendering Complex 3D Scenes

Mrkvička, Tomáš January 2008 (has links)
This thesis deals with the representation of the large and complex 3D scenes typically used by modern computer games. The main aim is the design and implementation of a data-driven rendering system, in which rendering is directed (driven) by the scene description. This description is also designed with scene creators in mind, who, in contrast to game developers, typically do not have deep knowledge of programming languages. The first part focuses on the design of an efficient scene description and its possible applications in scene rendering. The second part focuses on the system implementation itself. Finally, important system optimizations are discussed.
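One way a declarative description can drive rendering, as sketched above, is to let the data determine the draw order, e.g. sorting draw calls so objects sharing a material are drawn consecutively. The schema below (`objects`, `material`, `mesh`) is a hypothetical minimal example, not the thesis's actual description format.

```python
def build_render_queue(scene):
    """Sort draw calls by (material, mesh) so objects sharing a material are
    drawn back to back, minimizing costly render-state changes."""
    ordered = sorted(scene["objects"], key=lambda o: (o["material"], o["mesh"]))
    return [(o["material"], o["mesh"]) for o in ordered]

# a tiny declarative scene description, as a scene creator might author it
scene = {
    "objects": [
        {"material": "metal", "mesh": "crate"},
        {"material": "glass", "mesh": "window"},
        {"material": "metal", "mesh": "barrel"},
    ]
}
queue = build_render_queue(scene)
```

Because the ordering policy reads only data, artists can restructure scenes, and engineers can swap sort keys (depth, shader, texture), without touching the renderer's code.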
477

The Right Price - At What Cost? : A Multi-industry Approach in the Context of Technological Advancement / Rätt Pris - Till Vilken Kostnad?

Leijon, Anna Mikaelsdotter January 2017 (has links)
The business climate is undergoing a transformation and managers are faced with several challenges, not the least of which is related to pricing strategy. With an increased transparency in the market as well as an increased competitive pressure, and with more sophisticated and well-informed consumers, retail businesses find it hard to navigate the pricing jungle. At the same time, the conventional wisdom in the field of pricing and the theoretical models on the topic originate from a time long before the digitalization. Old models are not a problem in themselves, but when there are new forces in the pricing ecosystem, driven by technological advancement, an assessment of the incumbent models is in the best interest of both businesses and academia. The reason for this is that the use of old models that rely on inaccurate assumptions may impact businesses' prioritizing of resources or their overall business strategy. In addition, researchers might be distracted and the research field disrupted. Thus, the purpose of this study is to discuss whether or not there are additional dimensions in pricing strategy that are not covered by the incumbent pricing models. Here, dimensions refer to the key components of businesses' strategic decision making with regard to pricing. This thesis examines pricing models in today's business context in order to answer the research question: "Are there additional dimensions of the empirical reality of pricing strategy that are not covered by the incumbent pricing models?" The research question has been studied qualitatively through a literature review, a pilot study and twelve case studies, where the pilot study had the purpose of exploring the depth, whereas the multiple case studies focused on the breadth, of pricing strategies.
The case studies cover businesses in different retail industries and of different sizes, namely the industries of Clothing & Accessories, Daily Goods, Furniture and Toys & Tools, and of the following sizes: micro, small, medium and large. The empirical data has mainly been gathered by conducting interviews with production, sales and management personnel at the case businesses. The data has been structured, reduced and analysed with the help of a framework of analysis that has been developed throughout the pilot study. The results of this study lean on previous research, and a main divider in pricing strategies has been identified: businesses use either a data-driven or an intuition-driven approach in their strategic work with pricing. As such, it is proposed that the division of pricing strategies needs to be acknowledged, since the separate methodological approaches may lead to different results, while implying different costs, resources and required knowledge. Furthermore, the division may form a basis for competitive advantage, be extended to other areas of strategic management and become clearer, since the adoption of technology and its impact will increase in the future. As a result, in the future of pricing, the key is going to be to account for both the strategic perspectives and the methodological approaches in the strategic decision making process of pricing. / Affärsklimatet genomgår en omvandling och företagsledare står inför flera utmaningar, inte minst utmaningar som är relaterade till prissättningsstrategi. Med en alltmer transparent marknad och en ökad konkurrens företag emellan samt en mer sofistikerad och välinformerad konsument, finner företagen i detaljhandeln det svårt att navigera i prissättningsdjungeln. Samtidigt härrör den konventionella visdomen inom prissättning och de teoretiska modellerna på samma ämne från en tid långt innan digitaliseringen.
Gamla modeller är inte ett problem i sig, men när det finns nya krafter i prissättningens ekosystem, som drivs på av teknologisk utveckling, är en omprövning av de befintliga modellerna i både företag och akademikers intresse. Användningen av gamla modeller som bygger på felaktiga antaganden kan dock inverka på företagens prioritering av resurser eller på deras övergripande affärsstrategi. Dessutom kan forskare distraheras och forskningsfältet störas. Syftet med denna studie är således att diskutera huruvida det finns ytterligare dimensioner i prissättningsstrategi som inte omfattas av de befintliga prissättningsmodellerna. Här avser dimensioner nyckelkomponenter i företagens strategiska beslutsfattande när det gäller prissättning. Denna avhandling undersöker prissättningsmodellerna i dagens affärssammanhang för att svara på frågan: "Finns det ytterligare dimensioner av den empiriska verkligheten av prissättningsstrategi som inte omfattas av de befintliga prissättningsmodellerna?" Forskningsfrågan har studerats kvalitativt genom en litteraturgranskning, en pilotstudie och tolv fallstudier, där pilotstudien hade till syfte att utforska djupet, medan de flera fallstudierna inriktades på bredden, av prissättningsstrategier. Fallstudierna omfattar företag i industrin för detaljhandeln och företag av olika storlekar, nämligen inom detaljhandeln för Kläder & Accessoarer, Dagligvaror, Möbler och Leksaker & Verktyg, och av följande storlekar: mikro, små, medelstora och stora. Den empiriska datan har huvudsakligen insamlats med hjälp av intervjuer med produktions- och försäljningspersonal samt företagsledare hos företagen i fallstudierna. Uppgifterna har strukturerats, reducerats och analyserats med hjälp av en analysram som har utvecklats under pilotstudien.
Resultaten av denna studie bygger på tidigare forskning, och en huvuddelare i prissättningsstrategier har identifierats: företag använder antingen ett data-drivet eller ett intuition-drivet tillvägagångssätt i sitt strategiska arbete med prissättning. Därför föreslås att uppdelningen av prissättningsstrategier måste beaktas, eftersom de separata metodologiska tillvägagångssätten kan leda till olika resultat, samtidigt som de innebär olika kostnader samt kräver olika resurser och förkunskaper. Dessutom kan uppdelningen ligga till grund för konkurrensfördel, utvidgas till andra strategiska områden för företagsledare och bli tydligare, eftersom teknikens utbredning och påverkan kommer att öka i framtiden. Som en följd av detta kommer nyckeln i framtidens strategiska prissättning att vara att ta hänsyn till både de strategiska perspektiven och de metodologiska tillvägagångssätten i den strategiska beslutsprocessen för prissättning.
478

A COMPREHENSIVE UNDERWATER DOCKING APPROACH THROUGH EFFICIENT DETECTION AND STATION KEEPING WITH LEARNING-BASED TECHNIQUES

Jalil Francisco Chavez Galaviz (17435388) 11 December 2023 (has links)
The growing movement toward sustainable use of ocean resources is driven by the pressing need to alleviate environmental and human stressors on the planet and its oceans. From monitoring the food web to supporting sustainable fisheries, and from observing environmental shifts to protecting against the effects of climate change, ocean observations significantly impact the Blue Economy. Acknowledging the critical role of Autonomous Underwater Vehicles (AUVs) in achieving persistent ocean exploration, this research addresses challenges arising from the limited energy and storage capacity of AUVs, introducing a comprehensive underwater docking solution with a specific emphasis on enhancing the terminal homing phase through vision algorithms leveraging neural networks.

The primary goal of this work is to establish a docking procedure that is failure-tolerant, scalable, and systematically validated across diverse environmental conditions. To fulfill this objective, a robust dock detection mechanism has been developed that ensures the resilience of the docking procedure through improved detection in challenging environmental conditions. Additionally, the study addresses the prevalent issue of data sparsity in the marine domain by artificially generating data using CycleGAN and artistic style transfer. These approaches provide sufficient data for the dock detection algorithm, improving the localization of the docking station.

Furthermore, this work introduces methods to compress the learned dock detection model without compromising performance, enhancing the efficiency of the overall system. Alongside these advancements, a station-keeping algorithm is presented, enabling the mobile docking station to maintain position and heading while awaiting the arrival of the AUV. To use the onboard sensors and computational resources to their fullest extent, this research has also demonstrated the feasibility of simultaneously learning dock detection and marine wildlife classification through multi-task and transfer learning. This multifaceted approach not only tackles the limitations of AUVs' energy and storage capacity but also contributes to the robustness, scalability, and systematic validation of underwater docking procedures, aligning with the broader goals of sustainable ocean exploration and the Blue Economy.
479

Data-driven Interpolation Methods Applied to Antenna System Responses : Implementation of and Benchmarking / Datadrivna interpolationsmetoder applicerade på systemsvar från antenner : Implementering av och prestandajämförelse

Åkerstedt, Lucas January 2023 (has links)
With the advances in the telecommunications industry, there is a need to solve the in-band full-duplex (IBFD) problem for antenna systems. One premise for solving the IBFD problem is strong isolation between the transmitter and receiver antennas in an antenna system. To increase isolation, antenna engineers depend on simulation software to calculate the isolation between the antennas, i.e., the mutual coupling. Full-wave simulations that accurately calculate the mutual coupling between antennas are time-consuming, and there is a need to reduce the required time. In this thesis, we investigate how data-driven interpolation methods can reduce simulation times when applied to frequency-domain solvers. We benchmark four interpolation methods: vector fitting, the Loewner framework, Cauchy interpolation, and a modified version of Nevanlinna-Pick interpolation. These four methods are benchmarked on seven different antenna frequency responses, to investigate how many interpolation points they require to reach a certain root mean squared error (RMSE) tolerance. We also benchmark different frequency sampling algorithms together with the interpolation methods: predetermined sampling algorithms, such as linear and Chebyshev-based frequency sampling distributions, as well as two kinds of adaptive frequency sampling algorithms. The first kind is compatible with all four interpolation methods and selects the next frequency sample by analyzing the dynamics of the previously generated interpolant. The second is solely for the modified Nevanlinna-Pick interpolation method and is based on the free parameter in Nevanlinna-Pick interpolation.
From the benchmark results, two interpolation methods successfully decrease the RMSE as a function of the number of interpolation points used, namely vector fitting and the Loewner framework, with the Loewner framework performing slightly better. The results also show that vector fitting is less dependent on which frequency sampling algorithm is used, while the Loewner framework depends more strongly on it. For the Loewner framework, Chebyshev-based frequency sampling distributions proved to yield the best performance. / Med de snabba utvecklingarna i telekomindustrin så har det uppstått ett behov av att lösa det så kallade in-band full-duplex-problemet (IBFD). En premiss för att lösa IBFD-problemet är att framgångsrikt isolera transmissionsantennen från mottagarantennen inom ett antennsystem. För att öka isolationen mellan antennerna måste antenningenjörer använda sig av simulationsmjukvara för att beräkna isoleringen (den ömsesidiga kopplingen mellan antennerna). Full-wave-simuleringar som noggrant beräknar den ömsesidiga kopplingen är tidskrävande. Det finns därför ett behov av att minska simulationstiderna. I denna avhandling undersöker vi hur våra implementerade och datadrivna interpoleringsmetoder kan vara till hjälp för att minska de tidskrävande simuleringstiderna när de används på frekvensdomänslösare. Här prestandajämför vi de fyra interpoleringsmetoderna vector fitting, Loewner-ramverket, Cauchy-interpolering och modifierad Nevanlinna-Pick-interpolering. Dessa fyra interpoleringsmetoder är prestandajämförda på sju olika antennsystemsvar, med avseende på hur många interpoleringspunkter de behöver för att nå en viss root mean squared error (RMSE)-tolerans. Vi prestandajämför också olika frekvenssamplingsalgoritmer tillsammans med interpoleringsmetoderna.
Här använder vi oss av förbestämda frekvenssamplingsdistributioner så som linjär samplingsdistribution och Chebyshev-baserade samplingsdistributioner. Vi använder oss också av två olika sorters adaptiva frekvenssamplingsalgoritmer. Den första sortens adaptiva frekvenssamplingsalgoritm är kompatibel med alla de fyra interpoleringsmetoderna, och den väljer nästa frekvenspunkt genom att analysera den föregående interpolantens dynamik. Den andra adaptiva frekvenssamplingsalgoritmen är enbart för den modifierade Nevanlinna-Pick-interpoleringsalgoritmen, och den baserar sitt val av nästa frekvenspunkt på den fria parametern i Nevanlinna-Pick-interpolering. Från resultaten av prestandajämförelsen ser vi att två interpoleringsmetoder framgångsrikt lyckas minska medelvärdesfelet som en funktion av antalet interpoleringspunkter som används. Dessa två metoder är vector fitting och Loewner-ramverket. Här presterar Loewner-ramverket aningen bättre än vad vector fitting gör. Prestandajämförelsen visar också att vector fitting inte är lika beroende av vilken frekvenssamplingsalgoritm som används, medan Loewner-ramverket är mer beroende av den. För Loewner-ramverket visade det sig att Chebyshev-baserade frekvenssamplingsalgoritmer presterade bäst.
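The benefit of Chebyshev-based sampling noted above can be illustrated with barycentric Lagrange interpolation at Chebyshev points, a standard data-driven way to reconstruct a smooth response from few samples. The rational test function below is an illustrative stand-in for an antenna frequency response, not one of the thesis's benchmark cases, and none of the four benchmarked methods is implemented here.

```python
import math

def chebyshev_nodes(n, a, b):
    """n Chebyshev points of the second kind mapped to the interval [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / (n - 1))
            for k in range(n)]

def barycentric_interpolate(xs, ys, x):
    """Barycentric Lagrange interpolation through the samples (xs, ys)."""
    weights = []
    for j, xj in enumerate(xs):
        w = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                w /= (xj - xi)  # barycentric weight w_j = 1 / prod(x_j - x_i)
        weights.append(w)
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, weights):
        if x == xj:  # query point coincides with a sample point
            return yj
        term = wj / (x - xj)
        num += term * yj
        den += term
    return num / den

# stand-in "frequency response": a smooth rational function on [-1, 1]
response = lambda f: 1.0 / (1.0 + f * f)
nodes = chebyshev_nodes(12, -1.0, 1.0)
samples = [response(x) for x in nodes]
approx = barycentric_interpolate(nodes, samples, 0.3)
```

Clustering samples toward the band edges, as Chebyshev distributions do, is what keeps the interpolation error decaying geometrically for smooth responses, whereas equispaced samples can suffer from the Runge phenomenon.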
480

A case study of how Industry 4.0 will impact on a manual assembly process in an existing production system : Interpretation, enablers and benefits

Nessle Åsbrink, Marcus January 2020 (has links)
The term Industry 4.0, sometimes dismissed as a buzzword, is today on everyone's lips, and its benefits undeniably seem promising, with the potential to revolutionize the manufacturing industry. But what does it really mean? From a high-level business perspective, the concept of Industry 4.0 is most often associated with operational efficiency and promising business models, but studies show that many companies either lack understanding of the concept and how it should be implemented, or are dissatisfied with the progress of already implemented solutions. Further, there is a perception that it is difficult to implement the concept without interfering with the current production system. The purpose of this study is to interpret and outline the main characteristics and key components of the concept of Industry 4.0, and further to break down and conclude the potential benefits and enablers for a manufacturing company within the heavy automotive industry. To this end, a case study has been performed at a manual final assembly production unit within the heavy automotive industry. Accordingly, the study intends to give a deeper understanding of the concept and specifically of how manual assembly within an existing manual production system will be affected, and thus to outline the crucial enablers for successfully implementing the concept of Industry 4.0 and being prepared to adapt to the future challenges of the industry. The case study, performed through observations and interviews, approaches the issue from two perspectives: the current state and the desired state. A theoretical framework is then used as a basis for analysis of the results in order to present the findings and conclusions of the study. Lastly, two proofs of concept are performed to exemplify and support the findings. The study shows that succeeding with the implementation of Industry 4.0 is not only about the related technology itself.
Equally important parts to be considered and understood are the integration into the existing production system and design and purpose of the manual assembly process. Lastly the study shows that creating understanding and commitment in the organization by strategy, leadership, culture and competence is of greatest importance to succeed. / Begreppet Industri 4.0, ibland benämnt som modeord, är idag på allas tungor och fördelarna verkar onekligen lovande och tros ha potential att revolutionera tillverkningsindustrin. Men vad betyder det egentligen? Ur ett affärsperspektiv påvisar begreppet Industri 4.0 oftast ökad operativ effektivitet och lovande affärsmodeller men flera studier visar att många företag antingen saknar förståelse för konceptet och hur det ska implementeras eller är missnöjda med framstegen med redan implementerade lösningar. Vidare finns det en uppfattning att det är svårt att implementera konceptet utan störningar i det nuvarande produktionssystemet. Syftet med denna studie är att tolka och beskriva huvudegenskaperna och nyckelkomponenterna i konceptet Industri 4.0 och ytterligare bryta ner och konkludera de potentiella fördelarna och möjliggörarna för ett tillverkande företag inom den tunga bilindustrin. För att lyckas har en fallstudie utförts vid en manuell slutmonteringsenhet inom den tunga lastbilsindustrin. Studien avser på så sätt att ge en djupare förståelse för konceptet och specifikt hur manuell montering inom ett redan existerande manuellt produktionssystem kommer att påverkas. Alltså att kartlägga viktiga möjliggörare för att framgångsrikt kunna implementera konceptet Industri 4.0 och på så sätt vara beredd att ta sig an industrins framtida utmaningar. Fallstudien, utförd genom observationer och intervjuer, angriper frågan från två perspektiv; nuläge och önskat läge. Ett teoretiskt ramverk används sedan som underlag för analys av resultatet för att vidare kunna presentera rön och slutsats från studien. 
Slutligen utförs två experiment för att exemplifiera och stödja resultatet. Studien visar att en framgångsrik implementering av Industri 4.0 troligtvis inte bara handlar om den relaterade tekniken i sig. Lika viktiga delar som ska beaktas och förstås är integrationen i det befintliga produktionssystemet och utformningen och syftet med den manuella monteringsprocessen. Slutligen visar studien att det är av största vikt att skapa förståelse och engagemang i organisationen genom strategi, ledarskap, kultur och kompetens.
