371 |
It is Time to Become Data-driven, but How : Depicting a Development Process Model
Andersson, Johan, Gharaie, Amirhossein January 2021 (has links)
Background: The business model (BM) is an essential part of a firm, and it needs to be innovated continuously for the firm to become or remain competitive. The process of business model innovation (BMI) unfolds incrementally by re-designing existing activities or developing new ones in order to provide value propositions (VP). With the increasing availability of data, pressure is growing on firms to orchestrate their BMI activities around data as a key resource and to develop data-driven business models (DDBM). Problematization: The DDBM offers valuable possibilities by utilizing data to optimize current businesses and create new VPs. However, the development process of DDBMs is described as challenging and has scarcely been studied. Purpose: This study aims to explore what a data-driven business model development process looks like. More specifically, we adopted this research question: What are the phases and activities of a DDBM development process, and what characterizes this process? Method: This is a qualitative study in which the empirical data was collected through nine semi-structured interviews, with respondents drawn from three different initiatives. Empirical Findings: This study enriches the existing literature on BMI in general and data-driven business model innovation in particular. Concretely, it contributes to the process perspective of DDBM development: it helps unpack the complexity of data engagement in business model development and provides a visual process model as an artefact that shows the anatomy of the process. Additionally, this study relates to how value logics manifest through the states of artefacts, activities, and cognitions. Conclusions: This study concludes that the DDBM development process is structured in two phases, low data-related and high data-related activities, comprising seven sub-phases of different activities. The study also identified four underlying characteristics of the DDBM development process: value co-creation, iterative experimentation, ethical and regulatory risk, and adaptable strategy. Future research: Further work is needed to explain the anatomy and structure of the DDBM development process in different contexts, to uncover whether it captures the various complexities of data and to increase its generalizability. Furthermore, more research is required to differentiate between different types of business models and, consequently, to customize the development process for each type. Future research can also further explore value co-creation in developing DDBMs; in this direction, it would be interesting to connect the field of open innovation to the field of DDBM and, specifically, to its role in the DDBM development process. Another promising avenue for future research would be to go beyond the focus on merely improving the VP to maximize data monetization, and instead focus on the interplay and role that data has in sustainability.
|
372 |
Predicting the Temporal Dynamics of Turbulent Channels through Deep Learning / Predicering av Tidsdynamiken i Turbulenta Kanaler genom Djupinlärning
Borrelli, Giuseppe January 2021 (has links)
Interest towards machine learning applied to turbulence has grown rapidly in recent years. Thanks to deep-learning algorithms, flow-control strategies have been designed, as well as tools to model and reproduce the most relevant turbulent features. In particular, the success of recurrent neural networks (RNNs) has been demonstrated in many recent studies and applications. The main objective of this project is to assess the capability of these networks to reproduce the temporal evolution of a minimal turbulent channel flow. We first obtain a data-driven model based on a modal decomposition in the Fourier domain (FFT-POD) of the time series sampled from the flow. This particular case of turbulent flow allows us to accurately simulate the most relevant coherent structures close to the wall. Long short-term memory (LSTM) networks and a Koopman-based framework (KNF) are trained to predict the temporal dynamics of the minimal channel flow modes. Tests with different configurations highlight the limits of the KNF method compared to the LSTM, given the complexity of the data-driven model. Long-term predictions with the LSTM show excellent agreement from the statistical point of view, with errors below 2% for the best models. Furthermore, the analysis of the chaotic behaviour through the Lyapunov exponent and of the dynamic behaviour through Poincaré maps emphasizes the ability of the LSTM to reproduce the nature of turbulence. Alternative reduced-order models (ROMs), based on the identification of different turbulent structures, are explored and show good potential in predicting the temporal dynamics of the minimal channel.
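The abstract above describes training recurrent networks on modal (FFT-POD) time coefficients. The sketch below illustrates that general idea only; it is not the thesis code, and the file name, window length, network size, and real-valued coefficients are assumptions.

```python
# Minimal sketch (not the thesis code): an LSTM that learns to advance the
# time coefficients of a modal (e.g. FFT-POD) decomposition one step ahead.
# Shapes, window length and network size are illustrative assumptions.
import numpy as np
import tensorflow as tf

n_modes, lookback = 10, 20           # assumed number of retained modes / input window
coeffs = np.load("mode_coeffs.npy")  # hypothetical array, shape (n_steps, n_modes), real-valued

# Build (window -> next step) training pairs from the time series of coefficients.
X = np.stack([coeffs[i:i + lookback] for i in range(len(coeffs) - lookback)])
y = coeffs[lookback:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(lookback, n_modes)),
    tf.keras.layers.Dense(n_modes),   # predicted coefficients at the next time step
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)

# Closed-loop (autoregressive) prediction: feed predictions back in to march forward.
window = coeffs[-lookback:].copy()
for _ in range(100):
    nxt = model.predict(window[None], verbose=0)[0]
    window = np.vstack([window[1:], nxt])
```

In a study like this, the closed-loop predictions would then be judged against reference flow statistics rather than against instantaneous trajectories, in line with the statistical agreement reported above.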
|
373 |
Data-Driven Decision-Making for Sustainable Manufacturing Operations : An empirical study of supply chain operations within the Swedish manufacturing industry / Datadriven beslutsfattning för hållbara tillverkningsprocesser : En empirisk studie om försörjningskedjor inom den svenska tillverkningsindustrin
Nilsson, Viktor, Westbroek, Arvid January 2021 (has links)
A paradigm shift is taking place in the manufacturing industry, where companies strive to adopt digital tools in order to stay competitive. This push for digitization is taking place at the same time as global awareness of sustainability increases. Because current literature exhibits a knowledge gap linking data-driven processes, sustainability, and supply chain operations, further exploration of this area is needed. Therefore, the aim of this report is to investigate the business opportunities and challenges of data-driven decision-making, and how it relates to more sustainable supply chain operations within the manufacturing industry. To investigate data-driven decision-making and its impact on manufacturing supply chain operations, a literature review was first conducted, followed by interviews with case companies and experts. In total, 14 interviews were conducted within the areas of sustainability, supply chain operations, and data-driven decision-making. The interviews followed the designed framework and thus provided knowledge of the challenges, advantages, applications, and value capture in relation to data-driven decision-making and supply chain operations. Comparing the empirical data with previous literature, it was noted that data-driven decision-making entails both multiple challenges and advantages when it comes to improving manufacturers' sustainable performance. The main challenges include establishing efficient information sharing, standardized systems, and obtaining data that shows both reliability and validity. By overcoming these challenges, the sustainability benefits can be realized, including a mitigated bullwhip effect, improved planning, and reduced CO2 emissions. These benefits are driven by the transparency, automation, and optimization that come with data-driven decision-making. In conclusion, realizing data-driven decision-making within the manufacturing industry entails several challenges, but if companies overcome them the potential benefits will be unlimited. / Ett paradigmskifte pågår för närvarande i tillverkningsindustrin, där företag strävar efter att använda digitala verktyg för att kunna konkurrera mot sina konkurrenter. Strävan efter att bli digitaliserad sker samtidigt som den globala medvetenheten om hållbarhet ökar. Eftersom den aktuella litteraturen uppvisar en kunskapslucka som länkar datadrivna processer, hållbarhet och leveranskedjedrift, finns det ett behov av ytterligare forskning inom detta område. Målet med denna rapport är därför att undersöka affärsmöjligheterna och utmaningarna med datadrivet beslutsfattande, och hur det relaterar till mer hållbara försörjningskedjor inom tillverkningsindustrin. För att undersöka området inom datadrivet beslutsfattande och dess inverkan på leveranskedjedriften och tillverkningsindustrin genomfördes först en litteraturundersökning som följdes av intervjusessioner med utvalda företag och experter inom området. Sammanlagt intervjuades nio företag och sex experter som valdes ut efter deras kompetenser inom hållbarhet, leveranskedjedrift och datadrivet beslutsfattande. Intervjuerna genomfördes med hjälp av en intervjuguide för att därmed ge kunskap om kopplingarna mellan data, aktuella affärsverksamheter och förbättrad ekonomisk, social och miljöprestanda.
Detta inkluderar att utforska utmaningar, fördelar, applikationer och värdefångst i relation till datadrivet beslutsfattande och leveranskedjedrift. Vid analysen av empirisk data och jämförelsen med aktuell litteratur noterades det att datadrivet beslutsfattande medför flera olika utmaningar och fördelar när det gäller att förbättra tillverkningsföretagens hållbara prestanda. De viktigaste utmaningarna är att etablera effektiv informationsdelning, standardiserade system och att erhålla data som visar både tillförlitlighet och giltighet. Genom att hantera dessa utmaningar kan de hållbara fördelarna uppnås, vilket inkluderar en minskad bullwhip-effekt, minskade koldioxidutsläpp och förbättrad planering. Dessa fördelar drivs vidare av den transparens, automatisering och optimering som ett datadrivet beslutsfattande medför. Sammanfattningsvis innebär förverkligandet av datadrivet beslutsfattande inom tillverkningsindustrin flera utmaningar, men om företag övervinner utmaningarna kommer de potentiella fördelarna att vara obegränsade.
|
374 |
Data driven customer insights in the B2B sales process at high technology scaleups / Datadrivna kundinsikter i B2B försäljningsprocessen hos högteknologiska scaleups
Strömberg, Hanna January 2021 (has links)
When scaling a company, it is important to implement customer insights in order to achieve revenue growth. Understanding and defining a suitable B2B sales process has also been shown to play an important part in enhancing sales, and traditional processes include multiple steps performed by sales representatives. One step revolves around the presentation of the offered product or service. For sales representatives to present a product or service successfully, they must have or acquire deep knowledge of the customer, such as industry trends and the customer's general business. This can be achieved by acquiring customer insights that are data-driven. Adopting data-driven customer insights has also been proven to increase sales. Therefore, this research investigates the connection between the B2B sales process and the generation and implementation of data-driven customer insights. In particular, it explores the steps included in a B2B sales process at a high-technology scaleup and how data-driven customer insights can enhance the presentation step of that process. The research is carried out through a case study at a company labelled as a high-technology scaleup. Interviews were conducted with sales representatives working in the commercial team at the case company. The results show that the B2B sales process at high-technology scaleups comprises six steps: Lead generation, First meeting, Assessment, Contract proposal, Negotiation and Closed deal. The second step includes presenting the offered product or service, which this research identified as the most challenging for the sales representatives to execute successfully, due to the technical complexity of the product or service. Findings from this research show that data-driven customer insights can be used to simplify this step in the process. For example, data-driven customer insights can help personalize presentation material and enable rapport building. In addition, data-driven customer insights help align expectations between buyers and sellers during the first meeting, thus increasing the likelihood of reaching a closed deal. / När ett företag ska skalas upp är det viktigt att implementera kundinsikter för att uppnå ökad omsättning. Att förstå och definiera en passande B2B-försäljningsprocess har också visats spela en viktig roll för att nå ökade intäkter, och traditionella säljprocesser innehåller flera steg som säljpersonal utför. Ett steg kretsar kring presentationen av den erbjudna produkten eller tjänsten. För att säljpersonal ska kunna presentera en produkt eller tjänst med framgång behöver de förvärva eller ha djup kunskap om kunden, såsom branschtrender och generell verksamhet. Detta kan uppnås genom att anskaffa kundinsikter som är datadrivna. Att använda datadriven kundinsikt har också visats öka försäljningssiffror. Med detta som bakgrund undersöker därför den här forskningen sambandet mellan B2B-försäljningsprocessen och generering och implementering av datadriven kundinsikt. I synnerhet undersöker denna forskning stegen som ingår i en B2B-försäljningsprocess i ett högteknologiskt scaleup och därmed hur datadriven kundinsikt kan förbättra presentationssteget i B2B-försäljningsprocessen. Forskningen utförs genom en fallstudie på ett fallföretag som räknas som ett högteknologiskt scaleup. Intervjuer genomfördes med försäljningsmedarbetare som jobbar i det kommersiella teamet på företaget.
Resultatet från denna forskning visar att sex steg ingår i B2B-försäljningsprocessen vid högteknologiska scaleups. Dessa sex steg är: Leadsgenerering, Första möte, Utvärdering, Kontraktsförslag, Förhandling och Avslutad affär. Det andra steget innebär att den erbjudna produkten eller tjänsten presenteras, och detta steg identifierades som mest utmanande för försäljningsmedarbetarna att utföra med framgång på grund av produktens/tjänstens tekniska komplexitet. Vidare visar resultaten från denna forskning att datadriven kundinsikt kan användas för att förenkla detta steg i processen. Datadriven kundinsikt kan till exempel hjälpa till att personalisera presentationsmaterial och möjliggöra förtroendebyggande. Dessutom möjliggör datadrivna kundinsikter att köpare och säljare delar gemensamma förväntningar på det första mötet, vilket ökar sannolikheten att nå en avslutad affär.
|
375 |
Adapting a data-driven battery ageing model to make remaining-useful-life estimations using dynamic vehicle data / Anpassning av datadriven batteriåldringsmodell för uppskattningar av återstående livslängd från dynamiska fordonsdata
Phatarphod, Viraj January 2021 (has links)
Transportsektorn är en av världens största producenter av växthusgaser, och därför är dess avkarbonisering essentiell för att uppnå Parisavtalets mål för CO2-emissioner. Ett viktigt steg för att uppnå dessa mål är elektrifiering. Litium-jon-batterier (eng. lithium-ion batteries, 'LIB') har blivit väldigt populära energilagringssystem för batteridrivna elektriska fordon (eng. battery electric vehicles, 'BEV') men tenderar att åldras, precis som alla andra batterier. Därför krävs forskning kring batteriåldring, eftersom nedbrytningsprocesserna i hög grad påverkar prissättningen, prestandan och miljöpåverkan av BEV. Olika modeller används för att beskriva batteriernas åldrande. Datadrivna modeller som förutspår batteriers livstid ökar i popularitet, men deras noggrannhet och prestanda beror till stor del på indatats kvalitet. Tidsinhämtade data kräver enorma mängder lagringsutrymme, hög processorkapacitet och långa processtider; något som 'reducerad' eller 'aggregerad' data delvis åtgärdar. Denna avhandling fokuserar på att utveckla en metodik för användning av dynamiska fordonsdata i 'aggregerad' form. Tidsloggade data inhämtade från kallklimattestning av Scanias BEV-prototyp användes, varvid interaktionseffekterna mellan diverse fordonsparametrar samt deras effekt på batteriåldring i en batteriåldringsmodell analyserades. Olika tillvägagångssätt för strukturering av dynamiska fordonsdata i modellen undersöktes också. Tolv aggregeringsscenarion designades och testades. Dessutom valdes tre scenarion ut för uppskattningar av återstående användbar livslängd (eng. remaining useful life, 'RUL'), vilka jämfördes med resultat från tidsinhämtade data. Slutligen drogs slutsatser om parameterinteraktioner, strukturering av dynamiska fordonsdata och RUL-uppskattningar. Flera framtida utvecklingsområden har också föreslagits, bland annat att testa andra aggregeringstekniker, utöka modellen till fordonsflottor samt kategorisera användningsbeteenden av fordon för att förbättra RUL-uppskattningarna. / The transport sector is one of the world's largest greenhouse-gas-producing sectors, and its decarbonisation is imperative to achieve the CO2 emission targets set by the Paris Agreement. One important step towards achieving these targets is the electrification of the sector. Lithium-ion batteries (LIBs) have become very popular energy storage systems for battery electric vehicles (BEVs). However, LIBs, like all other batteries, tend to age. Hence, studying battery ageing phenomena is essential, since the degradation of battery characteristics largely determines the cost, performance and environmental impact of BEVs. Different modelling approaches are used to represent battery ageing behaviour. Data-driven models for predicting the lifetime of batteries are becoming popular. However, the accuracy and performance of data-driven models largely depend on the quality of the input data. Logging data in a time-sampled format results in huge data files that require enormous amounts of storage space, high processing power and long processing times. Instead, using data in a 'reduced' or 'aggregated' form can help address these issues. This thesis work focuses on developing a methodology for using dynamic vehicle data in an 'aggregated' form. Time-sampled data from a Scania prototype BEV truck, recorded during cold-climate testing, was used. The interaction effects between various vehicle parameters and their effect on battery ageing in a battery ageing model were analyzed.
Different approaches to structuring dynamic vehicle data for use in the model were also studied. Twelve aggregation scenarios were designed and tested. Furthermore, three scenarios were selected for making remaining-useful-life (RUL) estimations, and their results were compared with those obtained from time-sampled data. Finally, conclusions were drawn about parameter interactions, the structuring of dynamic vehicle data and RUL estimations. Several next steps for future work have also been suggested, such as testing other aggregation techniques, extending the model to vehicle fleets and categorizing vehicle usage behaviours to make better RUL estimations.
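The "aggregated" data format described above can be illustrated with a short sketch. This is not the thesis code; the file, column names, grouping key, and the 1 Hz sampling assumption are all illustrative.

```python
# Minimal sketch (assumed column names, not Scania's actual signals): collapsing
# time-sampled vehicle logs into one aggregated row per trip, the kind of
# "reduced" input a battery ageing model could consume instead of raw time series.
import pandas as pd

log = pd.read_csv("vehicle_log.csv")   # hypothetical file: one row per time sample

agg = (
    log.groupby("trip_id")
       .agg(
           duration_s=("timestamp", lambda t: t.max() - t.min()),   # assumes numeric seconds
           mean_current_a=("battery_current", "mean"),
           max_current_a=("battery_current", "max"),
           mean_cell_temp_c=("cell_temperature", "mean"),
           min_soc=("state_of_charge", "min"),
           max_soc=("state_of_charge", "max"),
           # charge throughput in Ah, assuming a 1 Hz sampling rate
           throughput_ah=("battery_current", lambda i: i.abs().sum() / 3600.0),
       )
       .reset_index()
)
# Each aggregation scenario would correspond to a different choice of features
# and grouping granularity (per trip, per day, per SOC window, ...).
```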
|
376 |
Data-Driven Reachability Analysis of Pedestrians Using Behavior Modes : Reducing the Conservativeness in Data-Driven Pedestrian Predictions by Incorporating Their Behavior / Datadriven Nåbarhetsanalys av Fotgängare som Använder Beteendelägen : Reducerar Konservativiteten i Datadriven Fotgängarpredicering Genom att Integrera Deras Beteende
Söderlund, August January 2023 (has links)
Predicting the future state occupancies of pedestrians in urban scenarios is a challenging task, especially since conventional methods need an explicit model of the system; this motivates data-driven reachability analysis. Data-driven reachability analysis uses data, inherently produced by an unknown system, to perform future state predictions using sets, generally represented by zonotopes. These predicted sets are generally more conservative than model-based reachable sets. This raises the question: is it possible to cluster previously recorded trajectory data based on the expressed behavior and perform the predictions on each cluster while still providing safety guarantees? The theory behind data-driven reachability analysis that can handle input noise and model uncertainties and still provide safety guarantees is quite recent. This means that previous implementations for predicting pedestrians are, in theory, probabilistic and would not be appropriate to deploy in actual systems. Thus, this thesis is not the first of its kind to predict future reachable sets for pedestrians using clustered behavioral data, but it is the first work that provides safety guarantees in the process. The method proposed in this thesis first labels the historically recorded trajectories with the behavior, also referred to as the mode, that the pedestrian expressed; this labelling is done offline using simple conditional statements (a minimal sketch of such labelling follows this record). The implementation is designed to be modular, enabling easier improvements to the labelling system. The reachable sets are then computed for each behavior separately, which enables a potential motion planner to decide which modal sets are relevant for specific scenarios. Theoretically, this method provides safety guarantees. The outcomes of this method were more descriptive reachable sets, meaning that the predicted areas intersected regions they reasonably should and did not intersect regions they reasonably should not. Also, the volume of the zonotopes for the modal sets was observed to be smaller than that of the implemented baseline, indicating less over-approximation and less conservative predictions. These results enable more efficient path planning for Connected and Autonomous Vehicles (CAVs), thus reducing fuel consumption and brake wear. / Att predicera framtida tillstånd för fotgängare i urbana situationer är en utmaning, speciellt med tanke på att konventionella metoder uttryckligen behöver en modell av systemet, därav introduceringen av datadriven nåbarhetsanalys. Datadriven nåbarhetsanalys använder data, naturligt producerad av ett okänt system, för att genomföra framtida tillståndspredicering med hjälp av matematiska set, generellt representerade av zonotoper. Dessa predicerade set är generellt sett mer konservativa än modellbaserade nåbara set. Därmed, är det möjligt att dela upp historiskt inspelade banor baserat på det uttryckta beteendet, genomföra prediceringar på varje kluster och bibehålla säkerhetsgarantier? Teorin bakom datadriven nåbarhetsanalys, som kan hantera brus i indata och modellosäkerheter och bibehålla säkerhetsgarantier, är väldigt ny. Detta betyder att tidigare implementationer för att predicera fotgängare är, teoretiskt sett, probabilistiska och inte lämpliga att implementera i riktiga system. Därmed är detta examensarbete inte det första som predicerar framtida nåbara set för fotgängare genom att använda kluster för beteendedatat, men det är det första arbetet som bibehåller säkerhetsgarantier i processen.
Den metod som introduceras i detta examensarbete rubricerar först de tidigare inspelade banorna efter det beteende, även kallat läge, som fotgängaren uttrycker, vilket görs genom enkla villkorssatser. Detta görs offline. Implementationen är dock designad för att vara modulär, vilket underlättar förbättringar av rubriceringssystemet. Därefter beräknas de nåbara seten för varje beteende separat, vilket gör att en potentiell rörelseplanerare kan avgöra vilka beteendeset som är relevanta för specifika scenarion. Teoretiskt sett ger denna metod säkerhetsgarantier. Resultaten från denna metod var först och främst mer beskrivande nåbara set, vilket betyder att de predicerade områdena korsar områden som de rimligtvis ska korsa, och inte korsar områden som de rimligen inte ska korsa. Dessutom observerades volymen på zonotoperna för beteendeseten vara mindre än volymen för baslinjeseten, vilket indikerar mindre överskattning och mindre konservativa prediceringar. Dessa resultat möjliggör mer effektiv rörelseplanering för uppkopplade och autonoma fordon, vilket reducerar bränsleförbrukningen och bromsslitaget.
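As a rough illustration of the "simple conditional statements" mentioned in the English abstract, the sketch below labels one recorded trajectory with a behavior mode. It is not the thesis implementation; the mode names, thresholds, and crossing-zone test are assumptions.

```python
# Minimal sketch (assumed mode names and thresholds): labelling recorded pedestrian
# trajectories into behavior modes with simple conditional statements, done offline
# before reachable sets are computed per mode.
import numpy as np

def label_mode(positions: np.ndarray, dt: float, crossing_zone) -> str:
    """positions: (N, 2) array of x, y samples; crossing_zone: callable (x, y) -> bool."""
    velocities = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocities, axis=1)

    if speed.mean() < 0.2:                               # barely moving
        return "standing"
    if any(crossing_zone(x, y) for x, y in positions):   # enters the crosswalk region
        return "crossing"
    # accumulated change of heading (angle wrap-around ignored for brevity)
    heading = np.arctan2(velocities[:, 1], velocities[:, 0])
    if np.abs(np.diff(heading)).sum() > np.pi / 2:       # large accumulated turning
        return "turning"
    return "walking_along_sidewalk"

# Reachable sets would then be computed separately for the trajectories in each mode,
# and a motion planner could query only the modes relevant to the current scene.
```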
|
377 |
Online Learning with Sample Selection
Gao, Cong January 2021 (has links)
In data-driven network and systems engineering, we often train models offline using measurement data collected from networks. Offline learning achieves good results but has drawbacks: model training incurs a high computational cost and the training process takes a long time. In this project, we follow an online approach to model training. The approach uses a cache of fixed size to store measurement samples and recomputes ML models based on the current cache contents. Key to this approach are sample selection algorithms that decide which samples are stored in the cache and which are evicted. We implement three sample selection methods in this project: reservoir sampling, maximum entropy sampling and maximum coverage sampling (a minimal reservoir-sampling sketch follows this record). In the context of sample selection, we also evaluate model recomputation methods that control when to retrain the model on the samples in the current cache; the retrained model is then used to predict subsequent samples until the next recomputation. We compare three recomputation strategies: no recomputation, periodic recomputation and recomputation using the ADWIN algorithm. We evaluate the three sample selection methods on five datasets: one is the FedCSIS 2020 Challenge dataset and the other four are KTH testbed datasets. We find that maximum entropy sampling achieves good performance compared to the other sample selection methods, and that recomputation using the ADWIN algorithm helps reduce the number of recomputations without affecting prediction performance. / Vid utveckling och underhåll av datornätverk och system används ofta maskininlärningsmodeller (ML) som beräknats offline med mätvärden som insamlats från nätverket. Att beräkna ML-modeller offline ger bra resultat men har nackdelar: beräkningen är tidskrävande och medför en hög beräkningskostnad. I detta projekt undersöker vi en metod för att beräkna ML-modeller online. Metoden använder en cache av fixerad storlek för att lagra mätvärden och omberäknar ML-modeller baserat på innehållet i cachen. Nyckeln till denna metod är användandet av urvalsalgoritmer som avgör vilka mätvärden som ska lagras i cachen och vilka som ska tas bort. Vi tillämpar tre urvalsmetoder: urval baserat på en behållare av fixerad storlek (reservoir sampling), urval baserat på maximal entropi, samt urval baserat på maximal täckning. Vid användning av urvalsmetoder utvärderar vi metoder för att avgöra när en ML-modell ska omberäknas baserat på urvalet i cachen. Den omberäknade ML-modellen används sedan för att göra prediktioner tills dess att modellen omberäknas igen. Vi utvärderar tre strategier för att avgöra när en modell ska omberäknas: ingen omberäkning, periodisk omberäkning, samt omberäkning baserad på ADWIN-algoritmen. Vi utvärderar de tre urvalsmetoderna på fem olika datauppsättningar. En av datauppsättningarna är baserad på FedCSIS 2020 Challenge och de andra fyra har insamlats från en testbädd på KTH. Vi finner att urval baserat på maximal entropi uppnår bra prestanda jämfört med de andra urvalsmetoderna, samt att en omberäkningsstrategi baserad på ADWIN-algoritmen kan minska antalet omberäkningar utan att försämra prediktionsprestandan.
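Of the three sample selection methods named above, reservoir sampling is the simplest; the sketch below shows one minimal way a fixed-size cache could be maintained with it. It is not the project's implementation, and the class and parameter names are assumptions.

```python
# Minimal sketch (not the project's implementation): a fixed-size cache maintained
# with classic reservoir sampling, so that every sample seen so far has equal
# probability of being in the cache when the model is recomputed.
import random

class ReservoirCache:
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.samples = []          # current cache contents
        self.seen = 0              # total number of samples observed
        self.rng = random.Random(seed)

    def offer(self, sample) -> None:
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            j = self.rng.randrange(self.seen)   # uniform index over all samples seen
            if j < self.capacity:
                self.samples[j] = sample        # evict a random resident sample

cache = ReservoirCache(capacity=1000)
# for sample in measurement_stream:             # hypothetical stream of feature vectors
#     cache.offer(sample)
#     # a drift detector such as ADWIN could trigger recomputation on cache.samples
```

Maximum entropy and maximum coverage sampling would replace the random eviction rule with criteria based on the cached samples' information content or feature-space coverage.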
|
378 |
Data-Driven Variational Multiscale Reduced Order Modeling of Turbulent Flows
Mou, Changhong 16 June 2021 (has links)
In this dissertation, we consider two different strategies for improving the projection-based reduced order model (ROM) accuracy: (I) adding closure terms to the standard ROM; (II) using Lagrangian data to improve the ROM basis.
Following strategy (I), we propose a new data-driven reduced order model (ROM) framework that centers around the hierarchical structure of the variational multiscale (VMS) methodology and utilizes data to increase the ROM accuracy at a modest computational cost. The VMS methodology is a natural fit for the hierarchical structure of the ROM basis: In the first step, we use the ROM projection to separate the scales into three categories: (i) resolved large scales, (ii) resolved small scales, and (iii) unresolved scales. In the second step, we explicitly identify the VMS-ROM closure terms, i.e., the terms representing the interactions among the three types of scales. In the third step, we use available data to model the VMS-ROM closure terms. Thus, instead of phenomenological models used in VMS for standard numerical discretizations (e.g., eddy viscosity models), we utilize available data to construct new structural VMS-ROM closure models. Specifically, we build ROM operators (vectors, matrices, and tensors) that are closest to the true ROM closure terms evaluated with the available data. We test the new data-driven VMS-ROM in the numerical simulation of four test cases: (i) the 1D Burgers equation with viscosity coefficient $\nu = 10^{-3}$; (ii) a 2D flow past a circular cylinder at Reynolds numbers $Re=100$, $Re=500$, and $Re=1000$; (iii) the quasi-geostrophic equations at Reynolds number $Re=450$ and Rossby number $Ro=0.0036$; and (iv) a 2D flow over a backward facing step at Reynolds number $Re=1000$. The numerical results show that the data-driven VMS-ROM is significantly more accurate than standard ROMs.
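The paragraph above describes building ROM operators that are closest to the true closure terms evaluated with data. The sketch below shows the general least-squares idea only, under assumed file names and a simple linear-plus-quadratic ansatz; it is not the dissertation's VMS-ROM formulation.

```python
# Minimal sketch of the general idea (not the dissertation's exact formulation):
# fit ROM closure operators by least squares so that a linear + quadratic ansatz
# in the ROM coefficients matches the closure term evaluated from data.
import numpy as np

r = 6                                   # assumed number of retained ROM modes
a = np.load("rom_coeffs.npy")           # hypothetical array, shape (n_snapshots, r)
tau = np.load("closure_term.npy")       # hypothetical "true" closure, shape (n_snapshots, r)

# Feature matrix: [a_i, a_i a_j] per snapshot (linear and quadratic terms).
quad = np.einsum("ti,tj->tij", a, a).reshape(len(a), r * r)
features = np.hstack([a, quad])         # shape (n_snapshots, r + r^2)

# One least-squares problem per closure component; regularization (e.g. ridge)
# could be added to keep the fitted operators well behaved.
coeffs, *_ = np.linalg.lstsq(features, tau, rcond=None)
A_tilde = coeffs[:r].T                  # fitted linear closure operator, (r, r)
B_tilde = coeffs[r:].T.reshape(r, r, r) # fitted quadratic closure tensor, (r, r, r)

def closure(a_t: np.ndarray) -> np.ndarray:
    """Modeled closure term for one ROM state a_t of length r."""
    return A_tilde @ a_t + np.einsum("ijk,j,k->i", B_tilde, a_t, a_t)
```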
Furthermore, we propose a new hybrid ROM framework for the numerical simulation of fluid flows. This hybrid framework incorporates two closure modeling strategies: (i) A structural closure modeling component that involves the recently proposed data-driven variational multiscale ROM approach, and (ii) A functional closure modeling component that introduces an artificial viscosity term. We also utilize physical constraints for the structural ROM operators in order to add robustness to the hybrid ROM. We perform a numerical investigation of the hybrid ROM for the three-dimensional turbulent channel flow at a Reynolds number $Re = 13,750$.
In addition, we focus on the mathematical foundations of ROM closures. First, we extend the verifiability concept from large eddy simulation to the ROM setting. Specifically, we call a ROM closure model verifiable if a small ROM closure model error (i.e., a small difference between the true ROM closure and the modeled ROM closure) implies a small ROM error. Second, we prove that a data-driven ROM closure (i.e., the data-driven variational multiscale ROM) is verifiable.
For strategy (II), we propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis. / Doctor of Philosophy / Reduced order models (ROMs) are popular in physical and engineering applications: for example, ROMs are widely used in aircraft design, as they can greatly reduce the computational cost of aeroelastic predictions while retaining good accuracy. However, for high Reynolds number turbulent flows, such as blood flows in arteries, oil transport in pipelines, and ocean currents, standard ROMs may yield inaccurate results. To improve ROM accuracy for turbulent flows, this dissertation investigates three different types of ROMs. Both numerical and theoretical results show that the proposed new ROMs yield more accurate results than the standard ROM and thus can be more useful.
|
379 |
DATA-DRIVEN APPROACHES FOR UNCERTAINTY QUANTIFICATION WITH PHYSICS MODELS
Huiru Li (18423333) 25 April 2024 (has links)
This research aims to address these critical challenges in uncertainty quantification. The objective is to employ data-driven approaches for UQ with physics models.
|
380 |
ENHANCING AUTOMOTIVE MANUFACTURING QUALITY AND REDUCING VARIABILITY : THROUGH SIX SIGMA PRINCIPLES
Cholakkal, Mohamed Jasil, Chettiyam Thodi, Nisar Ahamed January 2024 (has links)
The dissertation "Enhancing Automotive Manufacturing Quality and Reducing Variability Through Six Sigma Principles" provides a thorough analysis of the ways in which Six Sigma techniques can be applied in the automotive manufacturing sector to improve quality control, reduce variability, and boost operational efficiency. Utilizing a diverse set of secondary data sources, such as industry reports, case studies, academic research articles, and one-on-one consultations, this study seeks to offer important insights into the implementation and efficacy of Six Sigma principles in the context of automotive manufacturing. By stressing the fundamental ideas of Six Sigma outlined by Deming and Juran and scrutinizing influential works in quality management, the literature review builds a solid theoretical basis. The study's goals and research questions centre on comprehending how Six Sigma improves quality and lowers variability in automobile production processes. Through thorough secondary data analysis, this research identifies how Six Sigma may improve quality control, lower process variability, and increase operational efficiency in the automotive manufacturing industry. It offers useful insights into applying Six Sigma approaches, emphasizing the significance of staff involvement, data-driven decision-making, and leadership commitment in ensuring the success of Six Sigma projects. The thesis ends with suggestions for further research, such as investigating primary data-gathering techniques, contrasting this methodology with other approaches to quality management, and using longitudinal analysis to monitor the long-term effects of Six Sigma projects. In summary, this dissertation advances our knowledge of how Six Sigma concepts may be used to promote operational excellence and continuous improvement in the automobile manufacturing sector, and it provides practitioners and stakeholders in the industry with insightful information.
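The quality and variability targets discussed above are usually quantified with standard Six Sigma capability metrics. The sketch below uses illustrative numbers only (not data from the thesis) to compute defects per million opportunities (DPMO) and the corresponding sigma level.

```python
# Illustrative numbers only (not from the thesis): the standard Six Sigma metrics
# used to quantify process quality, i.e. defects per million opportunities (DPMO)
# and the corresponding sigma level, including the conventional 1.5-sigma shift.
from statistics import NormalDist

units_inspected = 50_000
opportunities_per_unit = 12        # e.g. welds / fasteners checked per assembly (assumed)
defects_found = 180

dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
yield_fraction = 1 - dpmo / 1_000_000
sigma_level = NormalDist().inv_cdf(yield_fraction) + 1.5   # long-term to short-term shift

print(f"DPMO: {dpmo:.1f}")                 # 300 defects per million opportunities here
print(f"Sigma level: {sigma_level:.2f}")   # about 4.9; a Six Sigma process targets 3.4 DPMO
```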
|