151

INTEGRATION OF PRODUCT LIFECYCLE BEHAVIOR INTO COMPONENT DESIGN, MANUFACTURING AND PERFORMANCE ANALYSIS TO REALIZE A DIGITAL TWIN REPRESENTATION THROUGH A MODEL-BASED FEATURE INFORMATION NETWORK

Saikiran Gopalakrishnan (12442764) 22 April 2022 (has links)
There has been growing interest within the aerospace industry in shifting towards a digital twin approach for reliable assessment of individual components during the product lifecycle, across the design, manufacturing, and in-service maintenance, repair & overhaul (MRO) stages. The transition towards digital twins relies on continuous updating of product lifecycle datasets and interoperable exchange of data applicable to components, thereby permitting engineers to utilize current-state information to make more informed downstream decisions. In this thesis, we develop a framework to store, track, update, and retrieve product lifecycle data applicable to a serialized component, its features, and individual locations.

From a structural integrity standpoint, the fatigue performance of a component is inherently tied to the component geometry, its material state, and the applied loading conditions. The manufacturing process controls the underlying material microstructure, which in turn governs the mechanical properties and ultimately the performance. Processing also controls the residual stress distributions within the component volume, which influence the durability and damage tolerance of the component. Hence, we demonstrate multiple use cases for fatigue life assessment of critical aerospace components, using the developed framework to efficiently track and retrieve (i) the current geometric state, (ii) the material microstructure state, and (iii) residual stress distributions.

Model-based definitions (MBDs) present opportunities to capture both geometric and non-geometric data using 3D computer-aided design (CAD) models, with the overarching aim of disseminating product information across the different stages of the lifecycle. MBDs can potentially eliminate the error-prone information exchange associated with traditional paper-based drawings and improve the fidelity of component details captured in 3D CAD models. However, current CAD capabilities limit association of material information with the component's shape definition. Furthermore, the material attributes of interest, viz. material microstructures and residual stress distributions, can vary across the component volume. To this end, in the first part of the thesis, we implement a CAD-based tool to store and retrieve metadata using point objects within a CAD model, thereby creating associations with spatial locations within the component. The tool is illustrated for storage and retrieval of bulk residual stresses developed during the manufacturing of a turbine disk component, acquired from process modeling and characterization. Further, variations in residual stress distribution owing to process model uncertainties are captured as separate instances of the disk's CAD model to represent part-to-part variability, as an analogy for tracking individual serialized components for digital twins. The propagation of the varying residual stresses from these CAD models into damage tolerance analyses performed at critical locations in the disk is demonstrated. The combination of geometric and non-geometric data inside the MBD, via storage of spatially and feature-varying information, presents opportunities to create digital replicas, or digital twins, of actual components with location-specific material state information.

To fully realize a digital twin description of components, it is crucial to dynamically update information tied to a component as it evolves across the lifecycle, and subsequently to track and retrieve current-state information. Hence, in the second part of the thesis, we propose a dynamic data linking approach to include material information within MBDs. As opposed to storing material datasets directly within the CAD model, as in the previous approach, we store and update the material datasets externally and create data linkages between the material datasets and features within the CAD models. To this end, we develop a model-based feature information network (MFIN), a software-agnostic framework for linking, updating, searching, and retrieving relevant information across a product's lifecycle. The use case of a damage tolerance analysis for a compressor bladed disk (blisk) is demonstrated, wherein Ti-6Al-4V blades are linear friction welded to a Ti-6Al-4V disk, producing well-defined regions exhibiting grain refinement and high residual stresses. The location-specific microstructural information and residual stress fields at the weld regions were captured, accessed within the MFIN, and used for downstream damage tolerance analysis. The introduction of the MFIN framework facilitates access to dynamically evolving as well as location-specific data for use within physics-based models.

In the third part of the thesis, we extend the MFIN framework to enable physics-based, microstructure-sensitive, and location-specific fatigue life analysis of a component. Traditionally, aerospace components are treated as monolithic structures during lifing, wherein microstructural information at individual locations is not necessarily considered. The resulting fatigue life estimates are conservative and associated with large uncertainty bounds, especially in components with gradient microstructures or distinct location-specific microstructures, leading to underutilization of the component's capabilities. To improve the precision of the fatigue estimates, a location-specific lifing framework is enabled via the MFIN for tracking and retrieval of microstructural information at distinct locations, for subsequent use within a crystal plasticity-based fatigue life prediction model. A use case of lifing a dual-microstructure heat treated LSHR turbine disk component is demonstrated at two locations, near the bore (fine grains) and near the rim (coarse grains). We employ the framework to access (a) the grain size statistics and (b) the macroscopic strain fields to inform precise boundary conditions for the crystal plasticity finite-element analysis. The illustrated approach of conducting location-specific predictive analysis of components presents opportunities for tailoring the manufacturing process and resulting microstructures to meet the component's targeted requirements.

For reliably conducting structural integrity analysis of a component, it is crucial to utilize its precise geometric description. Component geometries deviate from nominal design geometries after manufacturing or service. Traditionally, however, stress analyses are based on nominal part geometries during assessment of these components. In the last part of the thesis, we expand the MFIN framework to dynamically capture deviations in the part geometry via physical measurements, creating a new instance of the CAD model and the associated structural analysis. This automated workflow enables engineers to make improved decisions when assessing (i) as-manufactured part geometries that fall outside specification requirements during materials review board evaluations or (ii) in-service damage to parts during the MRO stages of the lifecycle. We demonstrate a use case assessing the structural integrity of a turbofan blade that experienced foreign object damage (FOD) during service. The as-designed geometry was updated based on coordinate measurements of the damaged blade surfaces, by applying a NURBS surface fit, and subsequently utilized for downstream finite-element stress analysis. The ramifications of the FOD on the local stresses within the part are illustrated, providing critical information to engineers for their MRO decisions. The automated flow of information from geometric inspection into structural analysis, enabled by the MFIN, presents opportunities for effectively assessing products using their current geometries and improving decision-making during the product lifecycle.
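The geometry-update step in this abstract, fitting a smooth surface to coordinate measurements of a damaged blade before re-analysis, can be sketched with SciPy's scattered-data B-spline surface fit. This is an illustrative stand-in for the thesis's NURBS workflow: the measurement points, parameterization, and height field below are all synthetic.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Synthetic "coordinate measurements" standing in for the scanned blade
# surface (the real workflow fits measured points from the damaged part).
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 400)          # chordwise surface parameter
v = rng.uniform(0.0, 1.0, 400)          # spanwise surface parameter
z = 0.1 * np.sin(np.pi * u) + 0.05 * v  # smooth blade-like height field

# Fit a bicubic smoothing B-spline surface to the scattered measurements.
tck = bisplrep(u, v, z, kx=3, ky=3)

# Evaluate the fitted surface on a regular grid, ready for re-meshing
# and downstream finite-element stress analysis.
grid_u = np.linspace(0.05, 0.95, 50)
grid_v = np.linspace(0.05, 0.95, 50)
surface = bisplev(grid_u, grid_v, tck)
```

In the actual workflow the evaluated grid would replace the as-designed surface patch in the CAD model; here it simply shows that a smooth analytic surface can be recovered from scattered point measurements.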
152

Production 4.0 of Ring Mill 4 Ovako AB

Hassan, Muhammad January 2020 (has links)
Cyber-physical systems (CPS) and the digital-twin approach are becoming popular in the Industry 4.0 revolution. A CPS not only allows viewing the online status of equipment, but also predicting the health of a tool. Based on real-time sensor data, it aims to detect anomalies in industrial operation and anticipate future failures, leading towards smart maintenance. CPS can contribute to a sustainable environment as well as sustainable production, thanks to its real-time analysis of production. In this thesis, we analyzed the behavior of a tool of Ringvalsverk 4 at Ovako together with its twin model (known as a digital twin) over a series of data. Initially, the data contained unwanted signals, which were removed in the data processing phase, and only the before-production signal was used to identify the tool's model. Matlab's System Identification Toolbox was used to identify the system model; the identified model was also validated and analyzed in terms of stability, and then used in the CPS. The digital-twin model was then used and its output analyzed together with the tool's output to detect when the tool starts to deviate from normal behavior.
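The thesis identifies the tool model with Matlab's System Identification Toolbox; the same idea can be sketched in Python with an ordinary least-squares ARX fit, where the residual between the identified model's one-step prediction and the measured output is what a digital twin would monitor for deviation. The second-order system, model orders, and data below are illustrative, not Ovako's.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta

def predict_arx(theta, u, y, na=2, nb=2):
    """One-step-ahead predictions from the identified model."""
    n = max(na, nb)
    preds = []
    for k in range(n, len(y)):
        phi = np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
        preds.append(phi @ theta)
    return np.array(preds)

# Simulate a simple stable second-order "tool" as ground truth.
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.2 * y[k - 1] - 0.4 * y[k - 2] + 0.5 * u[k - 1] + 0.1 * u[k - 2]

theta = fit_arx(u, y)
residual = y[2:] - predict_arx(theta, u, y)
rms_error = float(np.sqrt(np.mean(residual ** 2)))  # near zero: model recovered
```

Once a tool starts wearing, the one-step residual against the identified model grows, which is the deviation signal the twin monitors.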
153

Improving supply chain visibility within logistics by implementing a Digital Twin : A case study at Scania Logistics / Att förbättra synlighet inom logistikkedjor genom att implementera en Digital Tvilling : En fallstudie på Scania Logistics

BLOMKVIST, YLVA, ULLEMAR LOENBOM, LEO January 2020 (has links)
As organisations adapt to the rigorous demands set by global markets, the supply chains that constitute their logistics networks become increasingly complex. This often has a detrimental effect on the supply chain visibility within the organisation, which may in turn have a negative impact on the core business of the organisation. This paper aims to determine how organisations can benefit in terms of improving their logistical supply chain visibility by implementing a Digital Twin — an all-encompassing virtual representation of the physical assets that constitute the logistics system. Furthermore, challenges related to implementation and the necessary steps to overcome these challenges were examined. The results of the study are that Digital Twins may prove beneficial to organisations in terms of improving metrics of analytics, diagnostics, predictions and descriptions of physical assets. However, these benefits come with notable challenges — managing implementation and maintenance costs, ensuring proper information modelling, adopting new technology and leading the organisation through the changes that an implementation would entail. In conclusion, a Digital Twin is a powerful tool suitable for organisations where the benefits outweigh the challenges of the initial implementation. Therefore, careful consideration must be taken to ensure that the investment is worthwhile. Further research is required to determine the most efficient way of introducing a Digital Twin to a logistical supply chain.
154

Digital Twin Development and Advanced Process Control for Continuous Pharmaceutical Manufacturing

Yan-Shu Huang (9175667) 25 July 2023 (has links)
To apply Industry 4.0 technologies and accelerate the modernization of continuous pharmaceutical manufacturing, digital twin (DT) and advanced process control (APC) strategies are indispensable. The DT serves as a virtual representation that mirrors the behavior of the physical process system, enabling real-time monitoring and predictive capabilities. Consequently, this facilitates the feasibility of real-time release testing (RTRT) and enhances drug product development and manufacturing efficiency by reducing the need for extensive sampling and testing. Moreover, APC strategies are required to address variations in raw material properties and process uncertainties while ensuring that the desired critical quality attributes (CQAs) of in-process materials and final products are maintained. When deviations from quality targets are detected, APC must provide optimal real-time corrective actions, offering better control performance than traditional open-loop control. The progress in DT and APC is beneficial in shifting from the paradigm of Quality-by-Test (QbT) to those of Quality-by-Design (QbD) and Quality-by-Control (QbC), which emphasize the importance of process knowledge and real-time information in ensuring product quality.

This study focuses on four key elements and their applications in a continuous dry granulation tableting process, comprising feeding, blending, roll compaction, ribbon milling, and tableting unit operations. Firstly, the necessity of a digital infrastructure for data collection and integration is emphasized. An ISA-95-based hierarchical automation framework is implemented for continuous pharmaceutical manufacturing, with each level serving specific purposes related to production, sensing, process control, manufacturing operations, and business planning. Secondly, the investigation of process analytical technology (PAT) tools for real-time measurements is highlighted as a prerequisite for effective real-time process management. For instance, the measurement of mass flow rate, a critical process parameter (CPP) in continuous manufacturing, was previously limited to loss-in-weight (LIW) feeders. To overcome this limitation, a novel capacitance-based mass flow sensor, the ECVT sensor, has been integrated into the continuous direct compaction process to capture real-time powder flow rates downstream of the LIW feeders. Additionally, the use of near-infrared (NIR) sensors for real-time measurement of ribbon solid fraction in dry granulation processes is explored. Proper spectra selection and pre-processing techniques are employed to transform the spectra into useful real-time information. Thirdly, the development of quantitative models that establish a link between CPPs and CQAs is addressed, enabling effective product design and process control. Mechanistic models and hybrid models are employed to describe the continuous direct compaction (DC) and dry granulation (DG) processes. Finally, applying APC strategies becomes feasible with the aid of real-time measurements and model predictions. Real-time optimization techniques combine measurements and model predictions to infer unmeasured states and mitigate the impact of measurement noise. In this work, the moving horizon estimation-based nonlinear model predictive control (MHE-NMPC) framework is utilized. It leverages the capabilities of MHE for parameter updates and state estimation, enabling adaptive models using data from the past time window, while NMPC ensures satisfactory setpoint tracking and disturbance rejection by minimizing the error between the model predictions and the setpoint over the future time window. The MHE-NMPC framework has been implemented in the tableting process and demonstrated satisfactory control performance even when plant-model mismatch exists. In addition, MHE enables a sensor fusion framework in which at-line and online measurements can be integrated if the past time window is sufficiently long. The sensor fusion framework proves beneficial in extending at-line measurements from validation only to real-time decision-making.
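The MHE-NMPC combination described above can be illustrated, in a drastically simplified scalar form, as a loop that re-estimates a model parameter from a past data window (the MHE role) and then chooses the input minimizing predicted future tracking error (the NMPC role). The gain-only plant, window length, and setpoint below are hypothetical, not the thesis's tableting model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy plant: y = k_true * u. The controller's model gain starts wrong,
# standing in (very loosely) for plant-model mismatch in tableting.
k_true = 2.0

def plant(u):
    return k_true * u

def mhe_gain(us, ys):
    """Moving-horizon estimate of the gain: least squares over the past window."""
    us, ys = np.array(us), np.array(ys)
    return float(np.dot(us, ys) / np.dot(us, us))

def mpc_input(k_est, setpoint):
    """One-step 'NMPC': choose u minimizing predicted tracking error."""
    res = minimize_scalar(lambda u: (k_est * u - setpoint) ** 2,
                          bounds=(0.0, 10.0), method="bounded")
    return float(res.x)

setpoint = 5.0
k_est = 1.0                 # deliberately wrong initial model gain
us, ys = [], []
for _ in range(10):
    u = mpc_input(k_est, setpoint)        # NMPC step: optimize future error
    y = plant(u)                          # apply input, measure output
    us.append(u); ys.append(y)
    k_est = mhe_gain(us[-5:], ys[-5:])    # MHE step: re-fit over past window

tracking_error = abs(ys[-1] - setpoint)   # shrinks as the model adapts
```

Despite the initial 2x model mismatch, the estimator corrects the gain after one window of data and the controller then tracks the setpoint, which is the qualitative behavior the MHE-NMPC framework provides at full scale.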
155

DESIGN AND DEVELOPMENT OF A REAL-TIME CYBER-PHYSICAL TESTBED FOR CYBERSECURITY RESEARCH

Vasileios Theos (16615761) 03 August 2023 (has links)
Modern reactors promise enhanced capabilities not previously possible, including integration with the smart grid, remote monitoring, reduced operation and maintenance costs, and more efficient operation. Modern reactors are designed for installation in remote areas and integration with the electric smart grid, which requires secure, undisturbed remote control and the implementation of two-way communications and advanced digital technologies. However, two-way communications between the reactor facility, the enterprise network, and the grid require continuous transmission of operational data. This necessitates a deep understanding of cybersecurity and the development of a robust cybersecurity management plan for all reactor communication networks. Currently, there is a limited number of testbeds, mostly virtual, to perform cybersecurity research and to investigate and demonstrate cybersecurity implementations in a nuclear environment. To fill this gap, the goal of this thesis is the development of a real-time cyber-physical testbed with real operational and information technology data to allow cybersecurity research in a representative nuclear environment. In this thesis, a prototypic cyber-physical testbed was designed, built, tested, and installed in Purdue University Reactor One (PUR-1). The cyber-physical testbed consists of an Auxiliary Moderator Displacement Rod (AMDR) that experimentally simulates a regulating rod, several sensors, and digital controllers mirroring PUR-1 operation. The cyber-physical testbed is monitored and controlled remotely from the Remote Monitoring and Simulation Station (RMSS), located in another building with no line of sight to the reactor room. The design, construction, and testing of the cyber-physical testbed are presented along with its capabilities and limitations. The testbed's network architecture enables the performance of simulated cyberattacks, including false data injection and denial of service. Utilizing the RMSS setup, information collected from the cyber-physical testbed is compared with real-time operational PUR-1 data in order to evaluate system response under simulated cyber events. Furthermore, a physics-based model is developed and benchmarked to simulate physical phenomena in the PUR-1 reactor pool and provide information about reactor parameters that cannot be collected from the reactor instrumentation system.
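One way to realize the twin-versus-plant comparison described above is a residual test: flag samples where the reported measurement deviates from the digital twin's prediction by more than a few robust standard deviations. The signals, injected offset, and threshold below are synthetic placeholders, not PUR-1 data.

```python
import numpy as np

def detect_injection(plant_signal, twin_signal, threshold=4.0):
    """Flag samples where plant and twin outputs disagree by more than
    `threshold` robust standard deviations of the residual."""
    residual = plant_signal - twin_signal
    mad = np.median(np.abs(residual - np.median(residual)))
    sigma = 1.4826 * mad + 1e-12        # robust sigma estimate from the MAD
    return np.abs(residual) > threshold * sigma

# Simulated measurement channel: the twin tracks the plant except where
# a false data injection offsets the reported values.
rng = np.random.default_rng(7)
t = np.arange(1000)
twin = 10.0 + 0.01 * t                  # twin's predicted trajectory
plant = twin + rng.normal(0.0, 0.05, t.size)  # reported measurement + noise
plant[600:650] += 2.0                   # injected false data

alarms = detect_injection(plant, twin)
```

The robust (median/MAD) spread estimate keeps the detection threshold from being inflated by the attack samples themselves, so the injected window stands out cleanly.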
156

Digital Twin for Firmware and Artificial Intelligence prototyping

Maragno, Gianluca January 2023 (has links)
The fourth industrial revolution has given rise to new megatrends for improving time to market and saving resources in the development and manufacturing of new products. Among these trends, the Digital Twin (DT) is of major interest to developers and strategy analysts. The faithful transposition of a real entity into a digital environment enables exploration and testing of the different components within the defined object, taking a further step towards a truly correct-by-design approach. STMicroelectronics (ST) is exploring the benefits that this technology offers to developers. The company's primary focus revolves around the creation of SystemC models of its manufactured components, so that co-simulation between a Hardware (HW)/Software (SW) platform and a kinematic simulator is possible. This innovative approach facilitates comprehensive validation of the designed Firmware (FW), relying on the intricate interplay with sensory aspects influenced by both device behavior and environmental circumstances. Furthermore, many applications nowadays implement an Artificial Intelligence (AI) algorithm, whose performance is strictly dependent on the quality of the sensed signals and on the dataset on which the model is built. The creation of a proper DT allows AI development to begin during the design phase, not only producing a valid AI for the real product but also improving the quality and performance of the resulting model. This conclusion is demonstrated through the construction of a simple robotic arm implementing an anomaly detection algorithm based on a Machine Learning (ML) model.
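The anomaly-detection step on the robotic arm can be sketched with scikit-learn's IsolationForest, trained on normal operating cycles only so that faulty cycles score as outliers. The two-feature dataset below (e.g. per-cycle vibration RMS and peak current) is entirely synthetic, and the thesis's actual ML model may differ.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for per-cycle sensor features from a robotic arm;
# real data would come from the device or its digital twin simulation.
rng = np.random.default_rng(3)
normal = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(500, 2))  # healthy cycles
faulty = rng.normal(loc=[1.6, 1.2], scale=0.05, size=(20, 2))   # degraded cycles

# Train on normal behavior only; anomalies are whatever fails to fit it.
model = IsolationForest(random_state=0).fit(normal)
pred_normal = model.predict(normal)   # +1 = inlier
pred_faulty = model.predict(faulty)   # -1 = anomaly
faulty_caught = float((pred_faulty == -1).mean())
```

Training the detector inside the twin during the design phase, on simulated normal and degraded cycles, is exactly the workflow the abstract argues for: the model exists before the first physical prototype does.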
157

DIGITAL TWIN: FACTORY DISCRETE EVENT SIMULATION

Zachary Brooks Smith (7659032) 04 November 2019 (has links)
Industrial revolutions bring dynamic change to industry through major technological advances (Freeman & Louca, 2002). People and companies must take advantage of industrial revolutions in order to reap their benefits (Bruland & Smith, 2013). Currently, in the 4th industrial revolution, industry is transforming advanced manufacturing and engineering capabilities through digital transformation. Company X's production system was investigated in this research. Detailed evaluation of the production process revealed bottlenecks and inefficiency (Melton, 2005). Using the Digital Twin and discrete event factory simulation, the researcher gathered factory and production input data to simulate the process and provide a system-level, holistic view of Company X's production system, showing how factory simulation enables process improvement. The National Academy of Engineering supports discrete event factory simulation as advancing Personalized Learning through its ability to meet the unique problem-solving needs of engineering and manufacturing processes through advanced simulation technology (National Academy of Engineering, 2018). The directed project applied two process optimization experiments to the production system through the simulation tool 3DExperience with the DELMIA application from Dassault Systèmes (Dassault, 2018). The experiments resulted in a 10% improvement in production time and a 10% reduction in labor costs due to the optimization.
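The core idea of discrete event factory simulation, that the slowest station forms a bottleneck which caps line throughput and builds queues, can be sketched with a minimal two-station serial line in Python (deterministic service times; the thesis itself used DELMIA on 3DExperience, and these numbers are illustrative only).

```python
def simulate_line(n_parts, t_a=1.0, t_b=1.5):
    """Two-station serial line: each part is processed by station A, then B.
    Deterministic service times; returns (makespan, average wait before B)."""
    # Station A works back-to-back, finishing part i at (i + 1) * t_a.
    done_a = [(i + 1) * t_a for i in range(n_parts)]
    b_free = 0.0       # time at which station B next becomes available
    total_wait = 0.0   # cumulative queueing time in front of B
    for arrival in done_a:
        start = max(arrival, b_free)   # wait if B is still busy
        total_wait += start - arrival
        b_free = start + t_b
    return b_free, total_wait / n_parts

makespan, avg_wait = simulate_line(100)
# Station B (1.5 time units/part) is slower than A (1.0), so the queue in
# front of B grows without bound and B's rate caps line throughput.
```

Even this toy model shows what a factory-scale DES reveals: the makespan is governed almost entirely by the slow station, so speeding up A buys nothing while B is the constraint.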
158

Investigating the Use of Digital Twins to Optimize Waste Collection Routes : A holistic approach towards unlocking the potential of IoT and AI in waste management / Undersökning av användningen av digitala tvillingar för optimering av sophämtningsrutter : Ett holistiskt tillvägagångssätt för att ta del av potentialen för IoT och AI i sophantering

Medehal, Aarati January 2023 (has links)
Solid waste management is a global issue that affects everyone. The management of waste collection routes is a critical challenge in urban environments, primarily due to inefficient routing. This thesis investigates the use of real-time virtual replicas, namely Digital Twins, to optimize waste collection routes. By leveraging the capabilities of digital twins, this study intends to improve the effectiveness and efficiency of waste collection operations. The ‘gap’ that the study aims to uncover hence lies at the intersection of smart cities, Digital Twins, and waste collection routing. The research methodology comprises three key components. First, an exploration of five widely used metaheuristic algorithms provides a qualitative understanding of their applicability in vehicle routing and, subsequently, waste collection route optimization. Building on this foundation, a simple smart routing scenario for waste collection is presented, highlighting the limitations of a purely Internet of Things (IoT)-based approach. Next, the findings from this demonstration motivate the need for a more data-driven and intelligent solution, leading to the introduction of the Digital Twin concept. Subsequently, a twin framework is developed, which encompasses the technical anatomy and methodology required to create and utilize Digital Twins to optimize waste collection, considering factors such as real-time data integration, predictive analytics, and optimization algorithms. The outcome of this research contributes to the growing concept of smart cities and paves the way toward practical implementations in revolutionizing waste management and creating a sustainable future.
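In the spirit of the route-optimization approaches the thesis surveys, a classic construction-plus-local-search heuristic is easy to sketch: build a collection tour greedily with nearest neighbor, then improve it with 2-opt segment reversals. The bin coordinates below are random placeholders, and this simple heuristic pair stands in for the five metaheuristics the study actually examines.

```python
import math
import random

def route_length(points, order):
    """Total cycle length visiting points in the given order, returning to start."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """Greedy construction: always drive to the closest unvisited bin."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(points, order):
    """Local search: reverse route segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                cand = order[:i] + order[i:j][::-1] + order[j:]
                if route_length(points, cand) < route_length(points, order):
                    order, improved = cand, True
    return order

random.seed(4)
bins = [(random.random(), random.random()) for _ in range(30)]  # synthetic bin map
greedy = nearest_neighbor(bins)
improved = two_opt(bins, greedy)
```

In the Digital Twin framing, the bin coordinates (and fill levels) would stream in from IoT sensors, and the twin would re-run such an optimizer against the current state rather than a static map.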
159

Development and Evaluation of a Machine Vision System for Digital Thread Data Traceability in a Manufacturing Assembly Environment

Alexander W Meredith (15305698) 29 April 2023 (has links)
A thesis study investigating the development and evaluation of a computer vision (CV) system for a manufacturing assembly task is reported. The CV inference results are compared to a Manufacturing Process Plan, and an automation method completes a buyoff in the software Solumina. Research questions were created and three hypotheses were tested. A literature review was conducted, recognizing little consensus on Industry 4.0 technology adoption in manufacturing industries. Furthermore, the literature review uncovered the need for additional research within the topic of CV, specifically regarding the cognitive capabilities of CV in manufacturing. A CV system was developed and evaluated to test for 90% or greater confidence in part detection. A CV dataset was developed, and the system was trained and validated with it. Dataset contextualization was leveraged and evaluated, as per the literature. The CV system was trained on custom datasets containing six classes of parts. The pre-contextualization and post-contextualization datasets were compared by a two-sample t-test, and statistical significance was noted for three classes. A Python script was developed to compare as-assembled locations with as-defined positions of components, per the Manufacturing Process Plan. A comparison of yields between CV-based true positives (TPs) and human-based TPs was conducted with the system operating at a 2σ level. An automation method utilizing Microsoft Power Automate was developed to complete the cognitive functionality of the CV system, completing a buyoff in the software Solumina if CV-based TPs were equal to or greater than human-based TPs.
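The dataset comparison step above can be sketched with SciPy: a two-sample t-test on per-class detection confidences before and after dataset contextualization (Welch's variant is used here, which does not assume equal variances). The confidence values below are synthetic, not the thesis's measurements.

```python
import numpy as np
from scipy import stats

# Synthetic per-image detection confidences for one part class, before and
# after dataset contextualization (illustrative values only; the thesis
# compares real training runs).
rng = np.random.default_rng(11)
pre = rng.normal(0.86, 0.04, 40)    # pre-contextualization confidences
post = rng.normal(0.93, 0.03, 40)   # post-contextualization confidences

# Welch's two-sample t-test: does contextualization shift mean confidence?
t_stat, p_value = stats.ttest_ind(pre, post, equal_var=False)
significant = bool(p_value < 0.05)
```

Repeating this per class is how the study would flag which of the six part classes improved significantly, as reported for three of them.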
160

Fallstudie om Prediktivt och Tillståndsbaserat Underhåll inom Läkemedelsindustrin / Case study regarding Predictive and Condition-based Maintenance in the Pharmaceutical Industry

Redzovic, Numan, Malki, Anton January 2022 (has links)
Maintenance is an activity that every production operation wants to avoid as far as possible because of the cost and time associated with it. Despite this, a well-functioning maintenance organization is essential for the operational reliability and availability of production. Running an efficient maintenance operation is not about carrying out more maintenance than necessary, but about carrying out the right maintenance at the right time.
Traditionally, this is done by replacing wear parts and servicing the equipment at fixed intervals to prevent breakdowns, a method called preventive maintenance. The service intervals are specified by the suppliers and are based on general estimates of the service life of the wear parts, derived from testing and analysis. Preventive maintenance allows work to be carried out at convenient times so that production and availability are not affected, unlike running the equipment until it breaks down, which is called corrective (reactive) maintenance. However, the intervals the suppliers recommend do not guarantee that a part will last exactly that long: a part may, for example, fail earlier than stated or outlast its prescribed lifetime. For this reason, the natural next step in the development of maintenance is the ability to monitor the health of the equipment in order to predict when and why a breakdown will occur. This is what condition-based and predictive maintenance offer: the highest equipment availability and the most cost-effective maintenance organization, since good foresight and overview allow maintenance to be carried out only when it is needed. What makes this type of maintenance possible is the fourth industrial revolution, "Industry 4.0", and its associated technologies, which aim at full digitalization of production and smart factories. Technologies such as IoT, big-data analytics, and artificial intelligence are used to connect equipment to the network via sensors, so that data can be collected, stored, and analyzed to forecast the lifespan of parts and equipment. AstraZeneca in Södertälje manufactures different types of medicine, many of which are vital for the patients who take them. If production comes to a standstill due to equipment failure, it will not only have major financial consequences but also affect the people whose lives depend on the medicine delivered.
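The core idea described above, using sensor data to forecast when a part will fail, can be illustrated with a minimal sketch. The failure threshold, the sample values, and the assumption of roughly linear degradation are illustrative placeholders, not taken from the thesis:

```python
# Minimal remaining-useful-life (RUL) sketch: fit a linear trend to
# periodic sensor readings (e.g. fan-bearing vibration) and extrapolate
# to a failure threshold. Assumes approximately linear degradation.

def estimate_rul(readings, threshold):
    """readings: list of (time, value) samples; threshold: failure level.

    Returns the estimated time at which the fitted trend crosses the
    threshold, or None if the signal shows no upward (degrading) trend.
    """
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_v = sum(v for _, v in readings) / n
    # Ordinary least-squares slope and intercept.
    num = sum((t - mean_t) * (v - mean_v) for t, v in readings)
    den = sum((t - mean_t) ** 2 for t, _ in readings)
    slope = num / den
    if slope <= 0:
        return None  # no degradation trend detected
    intercept = mean_v - slope * mean_t
    return (threshold - intercept) / slope  # predicted failure time

# Example: vibration rises 0.5 units/day; failure threshold at 10.
samples = [(0, 2.0), (1, 2.5), (2, 3.0), (3, 3.5)]
print(estimate_rul(samples, 10.0))  # -> 16.0 (days)
```

In a real deployment the linear fit would be replaced by whatever degradation model the data supports; the point is only that monitoring plus extrapolation turns a fixed service interval into a predicted failure date.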
To ensure the availability of production, AstraZeneca has made attempts to apply condition-based and predictive maintenance, but these efforts are still in their infancy. Ventilation became the focus area of the report because it is a critical part of AstraZeneca's production: a ventilation failure results in a total production stoppage in the affected building until the problem is remedied and the facility decontaminated. The task is therefore to investigate the opportunities for AstraZeneca to develop predictive and condition-based maintenance for its ventilation systems, and then to identify and present proposed measures. The proposals were analyzed using a QFD matrix and a Pugh matrix in order to estimate which proposal is the most cost-effective, the most functionally effective, and which would bring the most benefit to maintenance at AstraZeneca.
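A Pugh-matrix ranking of the kind mentioned above can be sketched as follows. The criteria, weights, proposal names, and scores are invented placeholders for illustration, not the thesis's actual evaluation:

```python
# Weighted Pugh-matrix sketch: each proposal is scored -1/0/+1 per
# criterion relative to a baseline (the current maintenance setup),
# and the weighted sum ranks the alternatives. All numbers are
# illustrative assumptions.

criteria_weights = {"cost": 3, "function": 4, "ease_of_implementation": 2}

proposals = {
    "vibration_sensors_on_fans": {"cost": 1, "function": 1, "ease_of_implementation": 0},
    "full_iot_retrofit":         {"cost": -1, "function": 1, "ease_of_implementation": -1},
}

def pugh_score(scores, weights):
    """Weighted sum of -1/0/+1 scores against the baseline (which scores 0)."""
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(proposals,
                 key=lambda p: pugh_score(proposals[p], criteria_weights),
                 reverse=True)
print(ranking)  # best-ranked proposal first
```

A QFD matrix works similarly but relates customer requirements to technical characteristics; both reduce a multi-criteria comparison to weighted scores so that proposals can be ranked transparently.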
