61 |
Analysis, Diagnosis and Design for System-level Signal and Power Integrity in Chip-package-systems
Ambasana, Nikita January 2017 (has links) (PDF)
The Internet of Things (IoT) has ushered in an age where low-power sensors generate data that are communicated to a back-end cloud for massive data computation tasks. From the hardware perspective, this implies the co-existence of power-efficient, communication-capable sub-systems working harmoniously at the sensor nodes and high-speed processors in the cloud back-end. The package-board system-level design plays a crucial role in determining the performance of such low-power sensors and high-speed computing and communication systems. Although several commercial solutions exist for electromagnetic and circuit analysis and verification, problem-diagnosis and design tools are lacking, leading to longer design cycles and non-optimal system designs. This work aims at developing methodologies for faster analysis, sensitivity-based diagnosis and multi-objective design towards signal integrity and power integrity of such package-board system layouts.
The first part of this work aims at developing a methodology to enable faster and more exhaustive design space analysis. Electromagnetic analysis of packages and boards can be performed in the time domain, yielding metrics like eye-height/width, and in the frequency domain, yielding metrics like s-parameters and z-parameters. Generating eye-height/width at higher bit error rates requires longer bit sequences in time-domain circuit simulation, which is compute-time intensive. This work explores learning-based modelling techniques that rapidly map relevant frequency-domain metrics, like differential insertion loss and crosstalk, to eye-height/width, thereby facilitating a full-factorial design space sweep. Numerical experiments with artificial neural networks as well as least-squares support vector machines on SATA 3.0 and PCIe Gen 3 interfaces produce less than 2% average error with an order-of-magnitude speed-up in eye-height/width computation.
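The regression machinery behind such a frequency-domain-to-eye-metric surrogate can be sketched in a few lines. Everything below is illustrative: the channel metrics, the coefficients relating them to eye height, and the noise level are assumed, synthetic stand-ins for time-domain simulation data, and a ridge-regularized least-squares fit stands in for the ANN / LS-SVM models described above.

```python
import numpy as np

# Synthetic "training data": insertion loss and crosstalk at Nyquist (dB)
# mapped to eye height (mV) through an assumed monotonic relationship.
rng = np.random.default_rng(0)
n = 200
il_db = rng.uniform(-12.0, -2.0, n)      # differential insertion loss, dB
xtalk_db = rng.uniform(-40.0, -20.0, n)  # crosstalk, dB
eye_mv = 400.0 + 18.0 * il_db - 3.0 * xtalk_db + rng.normal(0, 2.0, n)

# Ridge-regularized least squares on a quadratic feature expansion,
# standing in for the learning-based surrogate of the thesis.
X = np.column_stack([np.ones(n), il_db, xtalk_db,
                     il_db**2, xtalk_db**2, il_db * xtalk_db])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eye_mv)

pred = X @ w
rel_err = np.mean(np.abs(pred - eye_mv) / np.abs(eye_mv))
print(f"mean relative error: {100 * rel_err:.2f}%")
```

Once fitted, evaluating the surrogate is a single matrix-vector product per design point, which is what makes a full-factorial sweep affordable compared with re-running time-domain simulations.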
Accurate power distribution network design is crucial for low-power sensors as well as cloud server boards that require multiple supply levels. Achieving target power-ground noise levels for low-power, complex power distribution networks requires several design and analysis cycles. Although various classes of analysis tools, 2.5D and 3D, are commercially available, design tools remain limited. In the second part of the thesis, a frequency-domain, mesh-based sensitivity formulation for DC and AC impedance (z-parameters) is proposed. This formulation enables diagnosis of the layout regions with maximum impact on achieving target specifications. The sensitivity information is also used for linear approximation of impedance-profile updates under small mesh variations, enabling faster analysis.
To enable the design of power delivery networks that achieve a target impedance, a mesh-based decoupling capacitor sensitivity formulation is presented. This analytical gradient is used in gradient-based optimization to obtain an optimal set of decoupling capacitors, with appropriate values and placement in packages/boards, for a given target impedance profile. Gradient-based techniques are far less expensive than the state-of-the-art evolutionary optimization techniques presently used for decoupling capacitor network design. In the last part of this work, the functional similarities between package-board design and radio-frequency imaging are explored. Qualitative inverse-solution methods common in the radio-frequency imaging community, like Tikhonov regularization and Landweber iteration, are applied to solve multi-objective, multi-variable signal integrity package design problems. Consequently, a novel Hierarchical Search Linear Back Projection algorithm is developed for efficient design-space search using piecewise-linear approximations. The presented algorithm is demonstrated to converge to the desired signal integrity specifications with a minimum of full-wave 3D solver iterations.
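The gradient-based decap-sizing idea can be illustrated on a lumped toy model. The thesis derives mesh-based analytic sensitivities for full package/board layouts; the sketch below only shows the gradient-descent mechanics for a single capacitor with assumed parasitics (ESR, ESL), sized so that the PDN impedance magnitude stays under a target over a band.

```python
import numpy as np

esr, esl = 5e-3, 0.5e-9            # assumed capacitor parasitics (ohm, H)
f = np.logspace(5, 7, 200)         # 100 kHz .. 10 MHz band
w = 2 * np.pi * f
z_target = 0.05                    # target impedance, ohm

def loss_and_grad(c):
    # lumped series model: Z = ESR + jwL + 1/(jwC)
    z = esr + 1j * w * esl + 1 / (1j * w * c)
    zm = np.abs(z)
    viol = np.maximum(zm - z_target, 0.0)       # violations above target
    # analytic sensitivity: d|Z|/dC = Re(conj(Z)/|Z| * dZ/dC)
    dz_dc = -1 / (1j * w * c**2)
    dzm_dc = np.real(np.conj(z) / zm * dz_dc)
    return np.sum(viol**2), np.sum(2 * viol * dzm_dc)

c = 1e-7                           # initial guess: 100 nF
for _ in range(400):
    loss, g = loss_and_grad(c)
    if loss == 0:
        break                      # target impedance met everywhere
    c *= np.exp(-0.05 * np.sign(g))  # multiplicative step in log-capacitance
print(f"chosen C = {c * 1e6:.1f} uF, residual loss = {loss:.3g}")
```

The analytic derivative plays the role of the mesh-based sensitivity in the thesis: each iteration costs one impedance evaluation, whereas derivative-free evolutionary search would need a population of them.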
|
62 |
Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems
Fu-Chen Chen (7484339) 14 January 2021 (has links)
Structural health monitoring and building assessment are crucial for acquiring structures' states and maintaining their conditions. Compared with human-labor surveys, which are subjective, time-consuming, and expensive, autonomous image and video analysis is faster, more efficient, and non-destructive. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) and support vector machines (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is utilized to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared on several crack datasets. For building assessment from street view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes that are crucial for flood risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and building stories. A feature fusion scheme is proposed that combines image features with meta-information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
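The LBP-plus-classifier pipeline for crack detection can be sketched end to end. The patches below are synthetic (a dark line on a bright background stands in for a crack), the dense 8-neighbour LBP is a minimal variant, and a nearest-centroid classifier stands in for the SVM of the thesis, so the absolute accuracy is only indicative.

```python
import numpy as np

def lbp_hist(img):
    # Dense 8-neighbour local binary pattern histogram (256 bins),
    # a classic hand-crafted texture descriptor.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / h.sum()

rng = np.random.default_rng(1)
def make_patch(cracked):
    img = rng.normal(0.7, 0.05, (32, 32))          # bright concrete texture
    if cracked:
        r = int(rng.integers(4, 28))
        img[r - 1:r + 1, :] = rng.normal(0.1, 0.05, (2, 32))  # dark crack line
    return img

X = np.array([lbp_hist(make_patch(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# Nearest-centroid classifier on LBP histograms (SVM stand-in).
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
d0 = np.linalg.norm(X - mu0, axis=1)
d1 = np.linalg.norm(X - mu1, axis=1)
acc = np.mean((d1 < d0) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

The crack's boundary rows produce a few strongly over-represented LBP codes, which is the texture signature a discriminative classifier latches onto.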
|
63 |
A study of transfer learning on data-driven motion synthesis frameworks / En studie av kunskapsöverföring på datadriven rörelse syntetiseringsramverk
Chen, Nuo January 2022 (links) (has links)
Various studies have shown the potential and robustness of deep learning-based approaches to synthesise novel motions of 3D characters in virtual environments, such as video games and films. The models are trained with motion data that are bound to the respective character skeleton (rig). This limits the scalability and applicability of the models, since they can only learn motions from one particular rig (domain) and produce motions in that domain only. Transfer learning techniques can be used to overcome this issue and allow the models to better adapt to other domains with limited data. This work presents a study of three transfer learning techniques for the proposed Objective-driven motion generation model (OMG), a model for procedurally generating animations conditioned on positional and rotational objectives. Three transfer learning approaches for achieving rig-agnostic encoding (RAE) are proposed and experimented with, to improve the learning of the model on new domains with limited data: Feature encoding (FE), Feature clustering (FC) and Feature selection (FS). All three approaches demonstrate significant improvement in both the performance and the visual quality of the generated animations, compared to the vanilla performance. The empirical results indicate that the FE and FC approaches yield better transfer quality than the FS approach. It is inconclusive which of the two performs better, but the FE approach is more computationally efficient, which makes it the more favourable choice for real-time applications.
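The rig-agnostic-encoding idea can be sketched with linear stand-ins: motions recorded on two different skeletons (here 12-D and 18-D pose vectors) are mapped into one shared latent space, so a model trained in that space transfers across rigs. The learned encoders in the thesis are nonlinear; the least-squares maps, dimensions, and paired data below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.normal(size=(200, 8))            # shared underlying motion latent
A = rng.normal(size=(8, 12))             # rig A: 12-D pose per frame
B = rng.normal(size=(8, 18))             # rig B: 18-D pose per frame
pose_a, pose_b = z @ A, z @ B            # paired recordings of the same motions

# Encoder for rig A: least-squares map from poses onto a latent space;
# encoder for rig B: fitted so paired frames land on the same latent codes.
enc_a = np.linalg.lstsq(pose_a, z, rcond=None)[0]
lat_a = pose_a @ enc_a
enc_b = np.linalg.lstsq(pose_b, lat_a, rcond=None)[0]

err = np.max(np.abs(pose_b @ enc_b - lat_a))
print(f"max cross-rig alignment error: {err:.2e}")
```

With the two rigs aligned in the latent space, a motion model trained on rig A's encodings can, in principle, be fine-tuned on rig B with far fewer samples, which is the point of the FE-style transfer studied above.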
|
64 |
HIGH-THROUGHPUT CALCULATIONS AND EXPERIMENTATION FOR THE DISCOVERY OF REFRACTORY COMPLEX CONCENTRATED ALLOYS WITH HIGH HARDNESS
Austin M Hernandez (12468585) 27 April 2022 (has links)
<p>Ni-based superalloys continue to assert themselves as the industry standard in high-stress and highly corrosive/oxidizing environments, such as those present in a gas turbine engine, due to their excellent high-temperature strengths, thermal and microstructural stabilities, and oxidation and creep resistances. Gas turbine engines are essential components for energy generation and propulsion in the modern age. However, Ni-based superalloys are reaching their limits in the operating conditions of these engines due to their melting onset temperatures of approximately 1300 °C. Therefore, a new class of materials must be formulated to surpass the capabilities of Ni-based superalloys, as increasing the operating temperature leads to increased efficiency and reductions in fuel consumption and greenhouse gas emissions. One of the proposed classes of materials is termed refractory complex concentrated alloys, or RCCAs, which consist of four or more refractory elements (in this study, selected from Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, and W) in equimolar or near-equimolar proportions. So far, there have been highly promising results with these alloys, including far higher melting points than Ni-based superalloys and outstanding high-temperature strengths in non-oxidizing environments. However, improvements in room-temperature ductility and high-temperature oxidation resistance are still needed for RCCAs. Also, given the millions of possible alloy compositions spanning various combinations and concentrations of refractory elements, methods more efficient than serial experimental trials are needed for identifying RCCAs with desired properties. A coupled computational and experimental approach for exploring a wide range of alloy systems and compositions is crucial for accelerating the discovery of RCCAs that may be capable of replacing Ni-based superalloys. </p>
<p>In this thesis, the CALPHAD method was utilized to generate basic thermodynamic properties of approximately 67,000 Al-bearing RCCAs. The alloys were then down-selected on the basis of criteria including solidus temperature, volume percent BCC phase, and aluminum activity. Machine learning models with physics-based descriptors were used to select several BCC-based alloys for fabrication and characterization, and an active learning loop was employed to aid rapid alloy discovery for high hardness and strength. This method resulted in the rapid identification of 15 BCC-based, four-component, Al-bearing RCCAs exhibiting room-temperature Vickers hardness from 1% to 35% above previously reported alloys. This work exemplifies the advantages of utilizing Integrated Computational Materials Engineering- and Materials Genome Initiative-driven approaches for the discovery and design of new materials with attractive properties.</p>
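The active-learning loop for alloy down-selection can be sketched as: fit a cheap surrogate to the hardness measurements so far, pick the most promising candidate composition, "fabricate" it (query an assumed ground-truth structure-property function), and refit. The quadratic composition features, least-squares surrogate, and hidden property map below are all synthetic stand-ins for the ML-with-physics-descriptors models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def featurize(x):                        # x: (n, 4) molar fractions
    feats = [np.ones(len(x))] + [x[:, i] for i in range(4)]
    feats += [x[:, i] * x[:, j] for i in range(4) for j in range(i, 4)]
    return np.column_stack(feats)        # 15 quadratic composition features

true_w = rng.normal(size=15)             # assumed hidden structure-property map
def measure_hardness(x):                 # stands in for fabrication + testing
    return featurize(x) @ true_w

pool = rng.dirichlet(np.ones(4), size=500)   # candidate alloy compositions
x_obs = rng.dirichlet(np.ones(4), size=30)   # initial "experiments"
y_obs = measure_hardness(x_obs)

for _ in range(5):                       # active-learning rounds
    w = np.linalg.lstsq(featurize(x_obs), y_obs, rcond=None)[0]
    pred = featurize(pool) @ w
    pick = int(np.argmax(pred))          # exploit: best predicted hardness
    x_obs = np.vstack([x_obs, pool[pick]])
    y_obs = np.append(y_obs, measure_hardness(pool[pick:pick + 1]))

best_found = y_obs.max()
print(f"best hardness found: {best_found:.3f} "
      f"(pool optimum {measure_hardness(pool).max():.3f})")
```

Because each loop iteration queries only the single most promising composition, the number of "fabrications" stays tiny relative to the 500-candidate pool, which mirrors the role of the loop in trimming 67,000 CALPHAD candidates down to a handful of experiments.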
|
65 |
Measuring the Technical and Process Benefits of Test Automation based on Machine Learning in an Embedded Device / Undersökning av teknik- och processorienterade fördelar med testautomation baserad på maskininlärning i ett inbyggt system
Olsson, Jakob January 2018 (has links)
Learning-based testing (LBT) is a testing paradigm that combines model-based testing with machine learning algorithms to automate the modeling of the SUT, test case generation, test case execution, and verdict construction. A tool that implements LBT, called LBTest, has been developed at the CSC school at KTH. LBTest utilizes machine learning algorithms with off-the-shelf equivalence and model checkers, and models user requirements in propositional linear temporal logic. In this study, it is investigated whether LBT is suitable for testing a micro-bus architecture within an embedded telecommunication device. Furthermore, ideas to further automate the testing process by designing a data model for user requirement generation are explored.
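The LBT loop can be sketched in miniature: query a system under test (SUT), build a state-machine model from the observed behaviour, then check a safety requirement on the model. Real LBT tools such as LBTest infer states from input/output traces alone and hand the model to an LTL model checker; the toy SUT below exposes its state directly so the sketch stays short, and the "model check" is a plain reachability scan.

```python
from collections import deque

class ToySUT:
    # A 3-state counter that faults if incremented past 2.
    def __init__(self):
        self.reset()
    def reset(self):
        self.state = 0
    def step(self, inp):
        if inp == "inc":
            self.state = min(self.state + 1, 3)   # state 3 = fault
        elif inp == "dec":
            self.state = max(self.state - 1, 0)
        return "error" if self.state == 3 else "ok"

alphabet = ["inc", "dec"]
sut = ToySUT()

# Learn the transition graph by breadth-first replay of input sequences.
transitions, outputs = {}, {0: "ok"}
frontier, seen = deque([(0, [])]), {0}
while frontier:
    state, prefix = frontier.popleft()
    for inp in alphabet:
        sut.reset()
        for p in prefix:                 # replay the prefix to reach `state`
            sut.step(p)
        out = sut.step(inp)
        nxt = sut.state
        transitions[(state, inp)] = nxt
        outputs[nxt] = out
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, prefix + [inp]))

# "Model check" the safety requirement G !error on the learned model.
violations = [s for s, o in outputs.items() if o == "error"]
print("learned states:", sorted(seen))
print("requirement 'G !error' violated in states:", violations)
```

A counterexample found on the learned model (here, the input sequence reaching state 3) becomes a concrete test case to run against the real SUT, which is how LBT turns model checking into test generation.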
|
66 |
Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring
David Matthew LeClerc (13048125) 29 July 2022 (has links)
<p>This paper explored the application of three machine learning models to predictive motor maintenance: Logistic Regression, Sequential Minimal Optimization (SMO), and Naïve Bayes. A comparative analysis of these models illustrated that, while each had an accuracy greater than 95% in this study, the Logistic Regression model exhibited the most reliable operation.</p>
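One of the compared models, Gaussian Naïve Bayes, is simple enough to sketch from scratch. The two motor-condition features (vibration RMS and winding temperature), their distributions, and the resulting accuracy are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
# Synthetic condition-monitoring features: [vibration RMS, winding temp °C]
healthy = np.column_stack([rng.normal(1.0, 0.2, n), rng.normal(60, 5, n)])
faulty = np.column_stack([rng.normal(2.2, 0.3, n), rng.normal(85, 6, n)])
X = np.vstack([healthy, faulty])
y = np.array([0] * n + [1] * n)

# Fit one Gaussian per class and per feature (the "naïve" independence step).
mu = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
var = np.array([X[y == k].var(axis=0) for k in (0, 1)])

def log_lik(x, k):
    return np.sum(-0.5 * np.log(2 * np.pi * var[k])
                  - (x - mu[k]) ** 2 / (2 * var[k]), axis=1)

pred = (log_lik(X, 1) > log_lik(X, 0)).astype(int)
acc = np.mean(pred == y)
print(f"Naive Bayes training accuracy: {acc:.3f}")
```

With equal class priors the decision reduces to comparing the two class log-likelihoods, which is why the model trains in closed form and evaluates in microseconds — one reason it appears in comparisons like the one above.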
|
67 |
Exploring Alignment Methods in an Audio Matching Scenario for a Music Practice Tool : A Study of User Demands, Technical Aspects, and a Web-Based Implementation / Utforskning av metoder för delsekvensjustering i ett ljudmatchnings scenario för ett musikövningssverktyg : En studie av användarkrav, tekniska aspekter och en webbaserad implementation
Ferm, Oliwer January 2024 (has links)
This work implements a prototype of a music practice tool and evaluates the alignment methods required for its audio matching scenario. Through two interviews with piano teachers, we investigated user demands for a music performance practice tool that incorporates alignment between a shorter practice segment and a reference performance, from a jazz and classical music point of view. Regarding technical aspects, we studied how deep learning (DL) based signal representations compare to standard manually tailored features in the alignment task. Experiments were conducted using a well-known alignment algorithm on a piano dataset. The dataset had manually annotated beat positions, which were used for evaluation. We found the traditional features to be superior to the DL-based signal representations when used independently; the DL-based representations on their own were insufficient for our test cases. However, the DL representations contained valuable information: multiple test cases demonstrated that the combination of DL and traditional representations outperformed all other considered approaches. We also experimented with deadpan MIDI renditions as references instead of actual performances, which yielded a slight but insignificant improvement in alignment performance. Finally, the prototype was implemented as a website, using a traditional signal representation as input to the alignment algorithm.
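The core audio-matching step, locating a short practice segment inside a longer reference, is typically done with subsequence dynamic time warping over feature frames. The sketch below uses synthetic 12-dimensional chroma-like vectors (a real system would extract chroma or learned features from audio) and a standard DTW recurrence with a free start anywhere in the reference.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((120, 12))                  # reference performance features
start = 40
query = ref[start:start + 25] + rng.normal(0, 0.01, (25, 12))  # noisy excerpt

# Pairwise frame distances, shape (25, 120).
cost = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=2)

n, m = cost.shape
D = np.full((n + 1, m + 1), np.inf)
D[0, :] = 0.0                                # free start: match anywhere in ref
for i in range(1, n + 1):
    for j in range(1, m + 1):
        D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],      # insertion
                                           D[i, j - 1],      # deletion
                                           D[i - 1, j - 1])  # match
end = int(np.argmin(D[n, 1:]))               # best end frame in the reference
print("query matched ending near reference frame", end)
```

Backtracking from the best end column would recover the full frame-to-frame alignment path, which is what a practice tool needs to jump the reference playback to the student's current position.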
|
68 |
Automatic Burns Analysis Using Machine Learning
Abubakar, Aliyu January 2022 (has links)
Burn injuries are a significant global health concern, causing high mortality and morbidity rates. Clinical assessment is the current standard for diagnosing burn injuries, but it suffers from inter-observer variability and is not suitable for intermediate burn depths. To address these challenges, machine learning-based techniques are proposed in this thesis to evaluate burn wounds. The study utilized image-based networks to analyze two medical image databases of burn injuries from Caucasian and Black-African cohorts. A deep learning-based model, called BurnsNet, was developed and used for real-time processing, achieving high accuracy in discriminating between different burn depths and pressure ulcer wounds. A multiracial data representation approach was also used to address data representation bias in burn analysis, with promising performance. The ML approach proved objective and cost-effective in assessing burn depths, providing an effective adjunct to clinical assessment. The findings suggest that machine learning-based techniques can reduce the workflow burden for burn surgeons and significantly reduce errors in burn diagnosis. They also highlight the potential of automation to improve burn care and enhance patients' quality of life. / Petroleum Technology Development Fund (PTDF);
Gombe State University study fellowship
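The data-representation-bias point can be demonstrated numerically: a classifier trained on one cohort can fail on another cohort whose feature distribution is shifted, while multi-cohort training recovers performance. The 2-D features, the cohort shift, and the two classes (e.g. two burn depths) below are synthetic stand-ins for image features from the two databases.

```python
import numpy as np

rng = np.random.default_rng(9)

def cohort(shift, n=100):
    # Two class clusters, offset by a cohort-specific feature shift.
    x0 = rng.normal([0.0, 0.0], 0.5, (n, 2)) + shift
    x1 = rng.normal([4.0, 0.0], 0.5, (n, 2)) + shift
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

xa, ya = cohort(np.array([0.0, 0.0]))      # cohort A
xb, yb = cohort(np.array([-6.0, 5.0]))     # cohort B: shifted features

def knn_predict(train_x, train_y, test_x):
    # 1-nearest-neighbour classifier.
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

# Biased setting: train on cohort A only, test on cohort B.
acc_biased = np.mean(knn_predict(xa, ya, xb) == yb)

# Representative setting: train on both cohorts, test on fresh cohort-B data.
x_mix, y_mix = np.vstack([xa, xb]), np.concatenate([ya, yb])
xb_test, yb_test = cohort(np.array([-6.0, 5.0]))
acc_mixed = np.mean(knn_predict(x_mix, y_mix, xb_test) == yb_test)
print(f"cohort-B accuracy: biased {acc_biased:.2f} vs mixed {acc_mixed:.2f}")
```

The gap between the two accuracies is the bias the multiracial data representation approach above is designed to close: no change to the model, only to what the training set covers.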
|
69 |
[pt] MONITORAMENTO DE MORANGOS: DETECÇÃO, CLASSIFICAÇÃO E SERVOVISÃO / [en] STRAWBERRY MONITORING: DETECTION, CLASSIFICATION, AND VISUAL SERVOING
GABRIEL LINS TENORIO 27 August 2024 (has links)
[en] The present work begins with an investigation into the use of 3D Deep Learning models for enhanced strawberry detection in polytunnels. We focus on two main tasks: firstly, fruit detection, comparing the standard MaskRCNN with an adapted version that integrates depth information (MaskRCNN-D). Both models are capable of classifying strawberries based on their maturity (ripe, unripe) and health status (affected by disease or fungus). Secondly, we focus on identifying the widest region of strawberries, fulfilling a requirement for a spectrometer system capable of measuring their sugar content. In this task, we compare a contour-based algorithm with an enhanced version of the VGG-16 model. Our findings demonstrate that integrating depth data into the MaskRCNN-D results in up to a 13.7 percent improvement in mAP across various strawberry test sets, including simulated ones, emphasizing the model's effectiveness in both real-world and simulated agricultural scenarios. Furthermore, our end-to-end pipeline approach, which combines the fruit detection (MaskRCNN-D) and widest region identification models (enhanced VGG-16), shows a remarkably low localization error, achieving down to 11.3 pixels of RMSE in a 224 × 224 strawberry cropped image. Finally, we explore the challenge of enhancing the quality of the data readings from the spectrometer through automatic sensor positioning. To this end, we designed and trained a Deep Learning model with simulated data, capable of predicting the sensor accuracy based on a given image of the strawberry and the desired displacement of the sensor's position. Using this model, we calculate the gradient of the accuracy output with respect to the displacement input. This results in a vector indicating the direction and magnitude with which the sensor should be moved to improve the sensor signal accuracy. A Visual Servoing solution based on this vector provided a significant increase in the average sensor accuracy and improved consistency across new simulated iterations.
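The gradient-driven sensor-positioning step can be sketched with a stand-in accuracy model: a function predicts sensor accuracy as a function of the 2-D displacement from the current position, and we ascend its gradient to propose the next sensor move. The thesis uses a trained DL model conditioned on strawberry images; the Gaussian peak and the optimal offset below are assumed surrogates.

```python
import numpy as np

opt = np.array([3.0, -2.0])                  # assumed best sensor offset (mm)

def predicted_accuracy(d):
    # Stand-in for the learned accuracy predictor at displacement d.
    return np.exp(-0.1 * np.sum((d - opt) ** 2))

def grad(d, eps=1e-5):
    # Central finite differences; a DL model would supply this gradient
    # by backpropagation through the network instead.
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2)
        e[k] = eps
        g[k] = (predicted_accuracy(d + e) - predicted_accuracy(d - e)) / (2 * eps)
    return g

d = np.zeros(2)                              # start at the current position
for _ in range(200):
    d += 2.0 * grad(d)                       # servo step along the gradient
print(f"proposed displacement: {d.round(2)}, "
      f"accuracy: {predicted_accuracy(d):.3f}")
```

Each gradient evaluation yields exactly the vector described above, a direction and magnitude for the next sensor move, so the visual-servoing loop is just repeated gradient ascent on the predicted accuracy.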
|
70 |
<b>Machine-Learning-Aided Development of Surrogate Models for Flexible Design Optimization of Enhanced Heat Transfer Surfaces</b>
Saeel Shrivallabh Pai (20692082) 10 February 2025 (has links)
<p dir="ltr">Due to the end of Dennard scaling, electronic devices must consume more electrical power for increased functionality. The increased power consumption, combined with diminishing form factors, results in increased power density within the device, leading to increased heat fluxes at the device surfaces. Without proper thermal management, the increase in heat fluxes can cause device temperatures to exceed operational limits, ultimately resulting in device failure. However, the dissipation of these high heat fluxes often requires pumping or refrigeration of a coolant, which, in turn, increases the total energy usage. Data centers, which form the backbone of the cloud infrastructure and the modern economy, account for ~2% of the total US electricity use, of which up to ~40% is spent on cooling needs alone. Thus, it is necessary to optimize the designs of cooling systems to dissipate higher heat fluxes at lower operating powers.</p><p dir="ltr">The design optimization of various thermal management components such as cold plates, heat sinks, and heat exchangers relies on accurate prediction of flow heat transfer and pressure drop. During the iterative design process, the heat transfer and pressure drop are typically either computed numerically or obtained using geometry-specific correlations for Nusselt number (<i>Nu</i>) and friction factor (<i>f</i>). Numerical approaches are accurate for evaluation of a single design but become computationally expensive if many design iterations are required (such as during formal optimization processes). Moreover, traditional empirical correlations are highly geometry-dependent and assume functional forms that can introduce inaccuracies. To overcome these limitations, this thesis introduces accurate and continuous-valued machine-learning (ML)-based surrogate models for predicting Nusselt number and friction factor on various heat exchange surfaces. 
These surrogate models, which are applicable to more geometries than traditional correlations, enable flexible and computationally inexpensive design optimization. The utility of these surrogate models is first demonstrated through the optimization of single-phase liquid cold plates under specific boundary conditions. Subsequently, their effectiveness is further showcased in the more practical challenge of designing liquid-to-liquid heat exchangers by integrating the surrogate models with a homogenization-based topology optimization framework. As topology optimization relies heavily on accurate predictions of pressure drop and heat transfer at every point in the domain during each iteration, using ML-based surrogate models greatly reduces the computational cost while enabling the development of high-performance, customized heat exchange surfaces. Thus, this work contributes to the advancement of thermal management by leveraging machine learning techniques for efficient and flexible design optimization processes.</p><p dir="ltr">First, artificial neural network (ANN)-based surrogate correlations are developed to predict <i>f</i> and <i>Nu</i> for fully developed internal flow in channels of arbitrary cross section. This effectively collapses all known correlations for channels of different cross section shapes into one correlation for <i>f</i> and one for <i>Nu</i>. The predictive performance and generality of the ANN-based surrogate models is verified on various shapes outside the training dataset, and then the models are used in the design optimization of flow cross sections based on performance metrics that weigh both heat transfer and pressure drop. The optimization process leads to novel shapes outside the training data, the performance of which is validated through numerical simulations. 
Although the ML model predictions lose accuracy outside the training set for these novel shapes, the predictions are shown to follow the correct trends with parametric variations of the shape and therefore successfully direct the search toward optimized shapes.</p><p dir="ltr">The success of ANN-aided shape optimization of constant cross-section internal flow channels serves as a compelling proof-of-concept, highlighting the potential of ML-aided optimization in thermal-fluid applications. However, to address the complexities of widely used thermal management devices such as cold plates and heat exchangers, known for their intricate surface geometries beyond constant cross-section channels, a strategic shift is imperative. With the goal of crafting ML models specifically tailored for practical design optimization algorithms like topology optimization, the thesis next delves into diverse micro-pin fin arrangements commonly employed in applications like cold plates and heat exchangers. This study on pin fins includes the exploration of hydrodynamic and thermal developing effects, as well as the impact of pin fin cross section shape and orientation. The ML-based predictive models are trained on numerically simulated synthetic data. The large amounts of accurate synthetic data required to train machine learning models are generated using a custom-developed simulation automation framework. With this framework, numerical flow and heat transfer simulations can be run on thousands of geometries and boundary conditions with minimal user intervention. The proposed models provide accurate predictions of <i>f</i> and <i>Nu</i>, with a near exact match to the training data as well as on unseen testing data. Furthermore, the outputs of the ANNs are inspected to propose new analytical correlations to estimate the hydrodynamic and thermal entrance lengths for flow through square pin fin arrays. 
The ML models are also shown to be usable for fluids other than water, employing physics-based, Prandtl-number-dependent scaling relations.</p><p dir="ltr">The thesis further demonstrates the utility of the ML surrogate models to facilitate the design optimization of thermal management components through their integration in the topology optimization (TO) framework for heat exchanger design. Topology optimization is a computational design methodology for determining the optimal material distribution within a design space based on given constraints. The use of topology optimization in the design of heat exchangers and other thermal management devices has been gaining significant attention in recent years, particularly with the widespread availability of additive manufacturing techniques that offer geometric design flexibility. Particularly advantageous for heat exchanger design is the homogenization approach to topology optimization, which represents partial densities in the design domain using a physical unit cell structure to achieve sub-grid resolution features. This approach requires geometry-specific correlations for <i>f</i> and <i>Nu</i> to simulate the performance of designs and evaluate the objective function during the optimization process. Topology optimized pin fin-based component designs rely on additive manufacturing, posing production scalability challenges with current technologies. Furthermore, the demand for flow and thermal anisotropy in several applications adds complexity to the design requirements. To address these challenges, the focus is shifted to traditional heat exchanger surface geometries that can be manufactured using conventional techniques, and which also exhibit pronounced anisotropy in flow and heat transfer characteristics. Traditionally, these geometries are distributed uniformly across heat exchange surfaces. 
However, incorporating such geometries into the topology optimization framework merges the strengths of both approaches, yielding mathematically optimized heat exchange surfaces with conventionally manufacturable designs. Offset strip fins, one such commonly used geometry, are chosen as the physical unit cell structure to demonstrate the integration of ML-based surrogate models into the topology optimization framework. The large amount of data required to develop robust machine learning-based surrogate <i>f</i> and <i>Nu</i> models for axial and cross flow of water through offset strip fins is generated through numerical simulations of convective flows through these geometries. The data generated are compared against in-house-measured experimental data as well as against data from literature. To facilitate the integration of ML models into topology optimization, a discrete adjoint method was developed to calculate sensitivities during topology optimization in the absence of analytical gradients.</p>
Thus, by enabling spatially localized optimization of enhanced surface structures using ML models, and consequently offering a pathway for expanding the design space to include many more surface structures in the topology optimization framework than previously possible, this thesis lays the foundation for advancing design optimization of thermal-fluid components and systems, using both additively and conventionally manufacturable geometries.</p>
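The surrogate-correlation idea at the heart of this work can be sketched on a case where a classical correlation exists: laminar, fully developed flow in rectangular ducts, where <i>f</i>·Re is a quintic polynomial in the aspect ratio (the Shah and London fit, with <i>f</i>·Re ≈ 56.9 for a square duct). A radial-basis-function fit stands in for the ANN surrogates of the thesis, and the sampling below is synthetic.

```python
import numpy as np

def f_re_rect(alpha):
    # Shah & London laminar friction correlation for rectangular ducts,
    # f*Re as a function of aspect ratio alpha in (0, 1].
    a = np.asarray(alpha)
    return 96 * (1 - 1.3553 * a + 1.9467 * a**2 - 1.7012 * a**3
                 + 0.9564 * a**4 - 0.2537 * a**5)

rng = np.random.default_rng(4)
a_train = rng.uniform(0.0, 1.0, 40)          # sampled "simulation" geometries
y_train = f_re_rect(a_train)

# Gaussian radial-basis-function surrogate (ANN stand-in).
centers = np.linspace(0, 1, 12)
def features(a):
    return np.exp(-((np.asarray(a)[:, None] - centers[None, :]) ** 2)
                  / (2 * 0.15**2))

w = np.linalg.lstsq(features(a_train), y_train, rcond=None)[0]

a_test = np.linspace(0.05, 0.95, 50)
err = np.max(np.abs(features(a_test) @ w - f_re_rect(a_test)) / f_re_rect(a_test))
print(f"max relative interpolation error: {100 * err:.3f}%")
```

A single continuous surrogate of this kind, trained across many cross-section shapes rather than one family, is what lets an optimizer query <i>f</i> and <i>Nu</i> cheaply at every design iteration instead of running a flow simulation.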
|