About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems

Fu-Chen Chen (7484339) 14 January 2021 (has links)
Structural health monitoring and building assessment are crucial for acquiring structures' states and maintaining their conditions. Compared with human surveys, which are subjective, time-consuming, and expensive, autonomous image and video analysis is faster, more efficient, and non-destructive. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) and support vector machines (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is utilized to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared on several crack datasets. For building assessment from street view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes that are crucial for flood risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and building stories. A feature fusion scheme is proposed that combines image features with meta-information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
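The Naïve Bayes fusion step described above can be sketched numerically. The snippet below is an illustrative toy, not the thesis's implementation: it assumes per-frame crack probabilities for a single registered pixel are already available and combines them in log-odds space under the conditional-independence assumption.

```python
import numpy as np

def fuse_frame_probabilities(p_frames, prior=0.5):
    """Fuse per-frame crack probabilities for one registered pixel.

    Assumes the frames are conditionally independent given the true
    pixel state (the naive Bayes assumption) and combines their
    evidence in log-odds space.
    """
    p = np.clip(np.asarray(p_frames, dtype=float), 1e-6, 1 - 1e-6)
    prior_logit = np.log(prior / (1 - prior))
    # Each frame contributes a log-likelihood ratio relative to the prior.
    evidence = np.sum(np.log(p / (1 - p)) - prior_logit)
    posterior_logit = prior_logit + evidence
    return 1.0 / (1.0 + np.exp(-posterior_logit))

# Three frames that each weakly suggest a crack fuse into a stronger belief.
single = 0.7
fused = fuse_frame_probabilities([0.7, 0.7, 0.7])
```

With a flat prior, three independent 0.7 observations fuse to roughly 0.93, which is how spatiotemporal registration lets weak per-frame detections reinforce one another.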
32

HIGH-THROUGHPUT CALCULATIONS AND EXPERIMENTATION FOR THE DISCOVERY OF REFRACTORY COMPLEX CONCENTRATED ALLOYS WITH HIGH HARDNESS

Austin M Hernandez (12468585) 27 April 2022 (has links)
Ni-based superalloys remain the industry standard in high-stress and highly corrosive/oxidizing environments, such as those present in a gas turbine engine, due to their excellent high-temperature strengths, thermal and microstructural stabilities, and oxidation and creep resistances. Gas turbine engines are essential components for energy generation and propulsion in the modern age. However, Ni-based superalloys are reaching their limits in the operating conditions of these engines due to their melting onset temperatures of approximately 1300 °C. Therefore, a new class of materials must be formulated to surpass the capabilities of Ni-based superalloys, as increasing the operating temperature leads to increased efficiency and reductions in fuel consumption and greenhouse gas emissions. One of the proposed classes of materials is termed refractory complex concentrated alloys, or RCCAs, which consist of four or more refractory elements (in this study, selected from Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, and W) in equimolar or near-equimolar proportions. So far, there have been highly promising results with these alloys, including far higher melting points than Ni-based superalloys and outstanding high-temperature strengths in non-oxidizing environments. However, improvements in room-temperature ductility and high-temperature oxidation resistance are still needed for RCCAs. Also, given the millions of possible alloy compositions spanning various combinations and concentrations of refractory elements, more efficient methods than serial experimental trials are needed for identifying RCCAs with desired properties. A coupled computational and experimental approach for exploring a wide range of alloy systems and compositions is crucial for accelerating the discovery of RCCAs that may be capable of replacing Ni-based superalloys.

In this thesis, the CALPHAD method was utilized to generate basic thermodynamic properties of approximately 67,000 Al-bearing RCCAs. The alloys were then down-selected on the basis of criteria including solidus temperature, volume percent of BCC phase, and aluminum activity. Machine learning models with physics-based descriptors were used to select several BCC-based alloys for fabrication and characterization, and an active learning loop was employed to aid rapid alloy discovery for high hardness and strength. This method resulted in the rapid identification of 15 BCC-based, four-component, Al-bearing RCCAs exhibiting room-temperature Vickers hardness 1% to 35% above previously reported alloys. This work exemplifies the advantages of Integrated Computational Materials Engineering- and Materials Genome Initiative-driven approaches for the discovery and design of new materials with attractive properties.
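The active learning loop described above can be sketched as follows. This is a minimal illustration under invented assumptions: the four-element composition space, the `measure_hardness` stand-in for experimental measurement, and all coefficients are hypothetical, and a random forest substitutes for the thesis's ML models with physics-based descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical "true" hardness over a 4-element composition space
# (fractions of, say, Ti, Nb, Mo, W); in reality this is what
# fabrication and Vickers testing would measure.
def measure_hardness(x):
    return 400 + 300 * x[:, 2] + 200 * x[:, 3] - 150 * (x[:, 0] - 0.25) ** 2

pool = rng.dirichlet(np.ones(4), size=2000)                 # candidate alloys
labeled = rng.choice(len(pool), size=20, replace=False).tolist()

for _ in range(5):                                          # active learning rounds
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(pool[labeled], measure_hardness(pool[labeled]))
    preds = model.predict(pool)
    preds[labeled] = -np.inf                                # don't re-test alloys
    labeled.append(int(np.argmax(preds)))                   # "fabricate" the best candidate

best = measure_hardness(pool[np.array(labeled)]).max()
```

Each round retrains the surrogate on all measured alloys and spends the next "experiment" on the most promising untested composition, which is the essence of using a loop like this to cut down serial trial-and-error.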
33

Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring

David Matthew LeClerc (13048125) 29 July 2022 (has links)
This paper explored the application of three machine learning models to predictive motor maintenance: Logistic Regression, Sequential Minimal Optimization (SMO), and Naïve Bayes. A comparative analysis of these models illustrated that while each achieved accuracy greater than 95% in this study, the Logistic Regression model exhibited the most reliable operation.
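A comparison along these lines can be sketched with scikit-learn. The data below is synthetic and stands in for motor sensor features (the real study's features and dataset are not reproduced here), and `SVC` is used as the SMO representative because its underlying libsvm solver is SMO-based.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for motor condition features (vibration, current, temperature).
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "smo_svm": SVC(kernel="rbf"),        # libsvm's training algorithm is SMO
    "naive_bayes": GaussianNB(),
}
# Fit each model and score held-out accuracy, mirroring the paper's comparison.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

On real motor data the ranking would of course depend on the features; the point is only the side-by-side evaluation protocol.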
34

Automatic Burns Analysis Using Machine Learning

Abubakar, Aliyu January 2022 (has links)
Burn injuries are a significant global health concern, causing high mortality and morbidity rates. Clinical assessment is the current standard for diagnosing burn injuries, but it suffers from interobserver variability and is not suitable for intermediate burn depths. To address these challenges, this thesis proposes machine learning-based techniques to evaluate burn wounds. The study utilized image-based networks to analyze two medical image databases of burn injuries from Caucasian and Black-African cohorts. A deep learning-based model, called BurnsNet, was developed and used for real-time processing, achieving high accuracy in discriminating between different burn depths and pressure ulcer wounds. A multiracial data representation approach was also used to address data representation bias in burn analysis, with promising performance. The ML approach proved objective and cost-effective in assessing burn depths, providing an effective adjunct to clinical assessment. The findings suggest that machine learning-based techniques can reduce the workflow burden for burn surgeons and significantly reduce errors in burn diagnosis. They also highlight the potential of automation to improve burn care and enhance patients' quality of life. / Petroleum Technology Development Fund (PTDF); Gombe State University study fellowship
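A heavily simplified version of image-based burn-depth discrimination can be sketched as below. This is not BurnsNet: synthetic color patches and hand-crafted channel statistics stand in for real wound images and learned CNN features, purely to illustrate the classification setup, and the three depth classes are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synthetic_wound_patches(n, redness):
    """Tiny stand-in for RGB wound patches; deeper burns shift the red channel."""
    imgs = rng.uniform(0, 1, size=(n, 8, 8, 3))
    imgs[..., 0] += redness
    return imgs.clip(0, 1)

# Three hypothetical classes: superficial, intermediate, full-thickness.
patches = np.concatenate([synthetic_wound_patches(60, r) for r in (0.1, 0.3, 0.5)])
labels = np.repeat([0, 1, 2], 60)

# Per-channel mean/std features in place of learned CNN features.
feats = np.concatenate([patches.mean(axis=(1, 2)), patches.std(axis=(1, 2))], axis=1)
acc = cross_val_score(RandomForestClassifier(random_state=0), feats, labels, cv=5).mean()
```

In the thesis the feature extractor is a deep network trained end to end on real images; the sketch only shows why discriminative image statistics make automated depth grading feasible.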
35

Machine-Learning-Aided Development of Surrogate Models for Flexible Design Optimization of Enhanced Heat Transfer Surfaces

Saeel Shrivallabh Pai (20692082) 10 February 2025 (has links)
Due to the end of Dennard scaling, electronic devices must consume more electrical power for increased functionality. The increased power consumption, combined with diminishing form factors, results in increased power density within the device, leading to increased heat fluxes at device surfaces. Without proper thermal management, the increased heat fluxes can cause device temperatures to exceed operational limits, ultimately resulting in device failure. However, the dissipation of these high heat fluxes often requires pumping or refrigeration of a coolant, which in turn increases total energy usage. Data centers, which form the backbone of the cloud infrastructure and the modern economy, account for ~2% of total US electricity use, of which up to ~40% is spent on cooling alone. Thus, cooling system designs must be optimized to dissipate higher heat fluxes at lower operating powers.

The design optimization of thermal management components such as cold plates, heat sinks, and heat exchangers relies on accurate prediction of flow heat transfer and pressure drop. During the iterative design process, the heat transfer and pressure drop are typically either computed numerically or obtained using geometry-specific correlations for Nusselt number (Nu) and friction factor (f). Numerical approaches are accurate for evaluating a single design but become computationally expensive when many design iterations are required (such as during formal optimization). Moreover, traditional empirical correlations are highly geometry-dependent and assume functional forms that can introduce inaccuracies. To overcome these limitations, this thesis introduces accurate, continuous-valued machine learning (ML)-based surrogate models for predicting the Nusselt number and friction factor of various heat exchange surfaces. These surrogate models, which are applicable to more geometries than traditional correlations, enable flexible and computationally inexpensive design optimization. Their utility is first demonstrated through the optimization of single-phase liquid cold plates under specific boundary conditions. Subsequently, their effectiveness is showcased in the more practical challenge of designing liquid-to-liquid heat exchangers by integrating the surrogate models with a homogenization-based topology optimization framework. As topology optimization relies heavily on accurate predictions of pressure drop and heat transfer at every point in the domain during each iteration, ML-based surrogate models greatly reduce the computational cost while enabling the development of high-performance, customized heat exchange surfaces. Thus, this work contributes to the advancement of thermal management by leveraging machine learning techniques for efficient and flexible design optimization.

First, artificial neural network (ANN)-based surrogate correlations are developed to predict f and Nu for fully developed internal flow in channels of arbitrary cross section. This effectively collapses all known correlations for channels of different cross-section shapes into one correlation for f and one for Nu. The predictive performance and generality of the ANN-based surrogate models are verified on shapes outside the training dataset, and the models are then used in the design optimization of flow cross sections based on performance metrics that weigh both heat transfer and pressure drop. The optimization process leads to novel shapes outside the training data, whose performance is validated through numerical simulations. Although the ML model predictions lose accuracy outside the training set for these novel shapes, they are shown to follow the correct trends with parametric variations of the shape and therefore successfully direct the search toward optimized shapes.

The success of ANN-aided shape optimization of constant-cross-section internal flow channels serves as a compelling proof of concept, highlighting the potential of ML-aided optimization in thermal-fluid applications. However, to address the complexities of widely used thermal management devices such as cold plates and heat exchangers, known for intricate surface geometries beyond constant-cross-section channels, a strategic shift is needed. With the goal of crafting ML models specifically tailored for practical design optimization algorithms like topology optimization, the thesis next delves into diverse micro pin fin arrangements commonly employed in cold plates and heat exchangers. This study on pin fins includes exploration of hydrodynamic and thermal developing effects, as well as the impact of pin fin cross-section shape and orientation. The ML-based predictive models are trained on numerically simulated synthetic data. The large amounts of accurate synthetic data required to train the models are generated using a custom-developed simulation automation framework, with which numerical flow and heat transfer simulations can be run on thousands of geometries and boundary conditions with minimal user intervention. The proposed models provide accurate predictions of f and Nu, with a near-exact match to the training data as well as to unseen testing data. Furthermore, the outputs of the ANNs are inspected to propose new analytical correlations for estimating the hydrodynamic and thermal entrance lengths for flow through square pin fin arrays. The ML models are also shown to be usable for fluids other than water by employing physics-based, Prandtl-number-dependent scaling relations.

The thesis further demonstrates the utility of the ML surrogate models in the design optimization of thermal management components through their integration into a topology optimization (TO) framework for heat exchanger design. Topology optimization is a computational design methodology for determining the optimal material distribution within a design space subject to given constraints. Its use in the design of heat exchangers and other thermal management devices has gained significant attention in recent years, particularly with the widespread availability of additive manufacturing techniques that offer geometric design flexibility. Particularly advantageous for heat exchanger design is the homogenization approach to topology optimization, which represents partial densities in the design domain using a physical unit cell structure to achieve sub-grid-resolution features. This approach requires geometry-specific correlations for f and Nu to simulate the performance of designs and evaluate the objective function during optimization. Topology-optimized pin-fin-based component designs rely on additive manufacturing, posing production scalability challenges with current technologies. Furthermore, the demand for flow and thermal anisotropy in several applications adds complexity to the design requirements. To address these challenges, the focus is shifted to traditional heat exchanger surface geometries that can be manufactured using conventional techniques and that also exhibit pronounced anisotropy in flow and heat transfer characteristics. Traditionally, these geometries are distributed uniformly across heat exchange surfaces. However, incorporating them into the topology optimization framework merges the strengths of both approaches, yielding mathematically optimized heat exchange surfaces with conventionally manufacturable designs. Offset strip fins, one such commonly used geometry, are chosen as the physical unit cell structure to demonstrate the integration of ML-based surrogate models into the topology optimization framework. The large amount of data required to develop robust machine learning-based surrogate f and Nu models for axial and cross flow of water through offset strip fins is generated through numerical simulations of convective flows through these geometries. The generated data are compared against in-house experimental measurements as well as data from the literature. To facilitate the integration of ML models into topology optimization, a discrete adjoint method was developed to calculate the sensitivities during optimization, circumventing the absence of analytical gradients.

Successful integration of the machine learning-based surrogate models into the topology optimization framework was demonstrated through the design optimization of a counterflow heat exchanger. The topology-optimized design outperformed benchmarks that used uniform, parametrically optimized offset strip fin arrays, exhibiting domain-specific enhancements such as peripheral flow paths for enhanced heat transfer and open channels to minimize pressure drop. This integration showcases the potential of combining ML models with topology optimization, providing a flexible framework that can be extended to a wide range of enhanced surface structure types and geometric configurations for which ML models can be trained. Thus, by enabling spatially localized optimization of enhanced surface structures using ML models, and consequently offering a pathway for expanding the design space to include many more surface structures in the topology optimization framework than previously possible, this thesis lays the foundation for advancing the design optimization of thermal-fluid components and systems using both additively and conventionally manufacturable geometries.
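To make the "one correlation for all shapes" idea concrete, here is a toy version of such a surrogate. The shape descriptor (cross-section perimeter squared over area), the tabulated laminar f·Re constants, and the tiny network are illustrative assumptions, not the thesis's actual features, data, or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Laminar fully developed friction constants (Darcy f * Re) for a few duct
# shapes, keyed by a crude compactness descriptor P^2/A of the cross section.
shapes = {
    "circle":             (4 * np.pi, 64.0),
    "square":             (16.0,      56.92),
    "equilateral_triangle": (20.78,   53.33),
}

rng = np.random.default_rng(0)
rows, targets = [], []
for compactness, f_re in shapes.values():
    re = rng.uniform(100, 2000, size=200)        # laminar Reynolds numbers
    rows.append(np.column_stack([np.full(200, compactness), np.log(re)]))
    targets.append(np.log(f_re / re))            # f = (f*Re) / Re
X, y = np.vstack(rows), np.concatenate(targets)

# One network replaces the per-shape correlations.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(X, y)

# Query the single surrogate for a square duct at Re = 1000; the analytic
# value is 56.92 / 1000 ~= 0.0569.
f_pred = np.exp(model.predict([[16.0, np.log(1000.0)]]))[0]
```

A real surrogate of this kind would use a far richer shape encoding and CFD-generated training data, but the collapse of several shape-specific correlations into one continuous model is the same.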
36

Leakage Conversion For Training Machine Learning Side Channel Attack Models Faster

Rohan Kumar Manna (8788244) 01 May 2020 (has links)
Recent improvements in the area of the Internet of Things (IoT) have led to extensive utilization of embedded devices and sensors. Along with this utilization, the need for safety and security of these devices increases proportionately. In the last two decades, side-channel attacks (SCA) have become a massive threat to interconnected embedded devices, and extensive research has led to the development of many different forms of SCA that extract the secret key by exploiting various leakage information. Lately, machine learning (ML) based models have been more effective in breaking complex encryption systems than other types of SCA models. However, these ML models require a lot of training data, which cannot be collected while attacking a device in a real-world situation. In this thesis, we address this issue by proposing a new technique of leakage conversion, in which high signal-to-noise ratio (SNR) power traces are converted to low-SNR averaged electromagnetic traces. In addition, we show how artificial neural networks (ANN) can learn nonlinear dependencies among features in the leakage information that cannot be captured by adaptive digital signal processing (DSP) algorithms. Initially, we successfully convert traces in the time interval of 80 to 200, as the cryptographic operations occur in that time frame. Next, we show the successful conversion of traces lying in any time frame, as well as traces with random key and plaintext values. Finally, to validate the leakage conversion technique and the generated traces, we successfully implement correlation electromagnetic analysis (CEMA) with an approximate minimum traces to disclosure (MTD) of 480.
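The core idea, learning a nonlinear mapping from high-SNR power traces to averaged EM traces that a linear (adaptive DSP) filter cannot capture, can be sketched on synthetic data. The "channel" below (a tanh plus a quadratic term) is a made-up stand-in for real leakage, and the network is far smaller than anything used in practice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical leakage channel: the "averaged EM" trace is a nonlinear
# distortion of the high-SNR power trace.
n_traces, n_samples = 500, 16
power = rng.normal(0.0, 1.0, size=(n_traces, n_samples))
em = np.tanh(2.0 * power) + 0.2 * power ** 2

ann = MLPRegressor(hidden_layer_sizes=(128,), max_iter=3000,
                   random_state=0).fit(power, em)
lin = LinearRegression().fit(power, em)       # linear-filter-style baseline

# Evaluate both converters on unseen traces from the same channel.
test_power = rng.normal(0.0, 1.0, size=(200, n_samples))
test_em = np.tanh(2.0 * test_power) + 0.2 * test_power ** 2
err = np.mean((ann.predict(test_power) - test_em) ** 2)
err_lin = np.mean((lin.predict(test_power) - test_em) ** 2)
```

The ANN's test error falls well below the linear baseline because the quadratic term is invisible to any linear filter of a zero-mean input, which mirrors the thesis's argument for ANN-based leakage conversion.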
37

ENABLING RIDE-SHARING IN ON-DEMAND AIR SERVICE OPERATIONS THROUGH REINFORCEMENT LEARNING

Apoorv Maheshwari (11564572) 22 November 2021 (has links)
The convergence of various technological and operational advancements has renewed interest in On-Demand Air Service (ODAS) as a viable mode of transportation. ODAS enables an end-user to be transported in an aircraft between their desired origin and destination at their preferred time without advance notice. Industry, academia, and government organizations are collaborating to create technology solutions suited for large-scale implementation of this mode of transportation. Market studies suggest that reducing the vehicle operating cost per passenger is one of the biggest enablers of this market. To enable ODAS, an operator controls a fleet of aircraft deployed across a set of nodes (e.g., airports, vertiports) to satisfy end-user transportation requests. There is a gap in the literature for a tractable, online methodology that can enable ride-sharing in on-demand operations while maintaining a publicly acceptable level of service (such as low waiting time). The need for an approach that not only supports a dynamic-stochastic formulation but can also handle uncertainty with unknowable properties drives me toward the field of Reinforcement Learning (RL). In this work, a novel two-layer hierarchical RL framework is proposed that can distribute a fleet of aircraft across a nodal network as well as perform real-time scheduling for an ODAS operator. The top layer of the framework, the Fleet Distributor, is modeled as a Partially Observable Markov Decision Process, whereas the lower layer, the Trip Request Manager, is modeled as a Semi-Markov Decision Process. The framework is demonstrated and assessed through various studies for a hypothetical ODAS operator in the Chicago region. This approach provides a new way of solving fleet distribution and scheduling problems in aviation, bridges the gap between state-of-the-art RL advancements and node-based transportation network problems, and offers a non-proprietary way to reasonably model ODAS operations that can be leveraged by researchers and policy makers.
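The flavor of the fleet-distribution layer can be conveyed with a toy tabular Q-learning agent, a drastic simplification of the POMDP formulation: one aircraft, three hypothetical nodes, and a demand pattern concentrated at one node. All rewards, costs, and probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 3
demand_probs = np.array([0.05, 0.05, 0.90])   # requests mostly appear at node 2
Q = np.zeros((n_nodes, n_nodes))              # state: current node; action: node to reposition to
alpha, gamma, eps = 0.2, 0.9, 0.2

state = 0
for _ in range(10_000):
    # Epsilon-greedy repositioning decision.
    if rng.random() < eps:
        action = int(rng.integers(n_nodes))
    else:
        action = int(Q[state].argmax())
    request_node = int(rng.choice(n_nodes, p=demand_probs))
    # Revenue for being where the request appears, minus a dead-head cost.
    reward = (3.0 if action == request_node else 0.0) - 0.5 * (action != state)
    Q[state, action] += alpha * (reward + gamma * Q[action].max() - Q[state, action])
    state = action

best_action_from_node_0 = int(Q[0].argmax())
```

After training, the learned policy repositions the aircraft toward the high-demand node from anywhere in the network; the thesis's Fleet Distributor solves a far richer version of this placement problem under partial observability.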
38

AUTOMATING BIG VISUAL DATA COLLECTION AND ANALYTICS TOWARD LIFECYCLE MANAGEMENT OF ENGINEERING SYSTEMS

Jongseong Choi (9011111) 09 September 2022 (has links)
Images have become a ubiquitous and efficient data form for recording information. The use of images for data capture has increased greatly due to the widespread availability of image sensors and sensor platforms (e.g., smartphones and drones), the simplicity of this approach for broad groups of users, and our pervasive access to the internet, itself one class of infrastructure. Such data contains abundant visual information that can be exploited to automate asset assessment and management tasks that traditionally are conducted manually for engineering systems. Automation of the data collection, extraction, and analytics is, however, key to realizing the use of these data for decision-making. Despite recent advances in computer vision and machine learning techniques for extracting information from images, automation of these real-world tasks has been limited thus far, partly due to the variety of data and the fundamental challenges associated with each domain. Because of societal demands for access to and steady operation of our infrastructure systems, this class of systems represents an ideal application where automation can have high impact. Extensive human involvement is currently required to perform everyday procedures such as organizing, filtering, and ranking the data before executing analysis techniques, consequently discouraging engineers from even collecting large volumes of data. To break down these barriers, methods must be developed and validated to speed up the analysis and management of data over the lifecycle of infrastructure systems. In this dissertation, big visual data collection and analysis methods are developed with the goal of reducing the burden associated with manual procedures. The automated capabilities developed herein are focused on applications in lifecycle visual assessment and are intended to exploit large volumes of data collected periodically over time. To demonstrate the methods, various classes of infrastructure commonly located in our communities are chosen for validation because they: (i) provide commodities and services essential to enable, sustain, or enhance our lives; and (ii) require lifecycle structural assessment as a high priority. Applications of infrastructure assessment are developed that employ multiple big visual data techniques, such as region-of-interest extraction, orthophoto generation, image localization, object detection, and image organization using convolutional neural networks (CNNs), depending on the domain of lifecycle assessment needed for the target infrastructure. This research can also be adapted to many other applications where monitoring and maintenance are required over the lifecycle.
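One small piece of such a pipeline, organizing a large photo collection by visual content before manual review, can be sketched with clustering. The 64-dimensional vectors below are synthetic stand-ins for CNN image embeddings, and the three visual groups (e.g., facade views, roof views, street context) are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for CNN embeddings of site photos: three well-separated
# visual groups of 40 images each, 64 dimensions per embedding.
centers = rng.normal(0, 5, size=(3, 64))
features = np.vstack([c + rng.normal(0, 1, size=(40, 64)) for c in centers])

# Unsupervised organization: bucket the collection so an engineer reviews
# coherent groups instead of an unsorted dump of images.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```

In the dissertation the embeddings come from trained CNNs and the organization is task-specific; the sketch only shows the automation pattern of replacing manual sorting with feature-space grouping.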
39

Development of a Predictive Model of Real Estate Value in the Region (Master's Thesis)

Slobodchikova, E. V. January 2024 (has links)
This master's thesis examines the application of machine learning methods for predicting real estate prices in the Yamalo-Nenets Autonomous District. It reviews existing machine learning algorithms, such as linear regression, random forest, and XGBoost, and analyzes the evaluation criteria for machine learning models and the factors affecting real estate value. The research involves analyzing the data, identifying the parameters used as features, and building a predictive model using the selected machine learning algorithms. As a result, random forest was selected as the optimal model for predicting property prices in the region, as it showed the best results on the applied metrics.
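The model comparison behind this conclusion can be sketched as follows. The features, the price function, and all coefficients are synthetic stand-ins for the thesis's listing data, used only to show why a random forest can beat linear regression when price depends nonlinearly on a feature such as distance to the city center.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: area (m^2), rooms, distance to center (km), age (yr).
X = np.column_stack([
    rng.uniform(20, 150, n),
    rng.integers(1, 6, n).astype(float),
    rng.uniform(0, 30, n),
    rng.uniform(0, 60, n),
])
# Price with a nonlinear distance premium that a linear model cannot capture.
price = (50_000 * X[:, 0] + 200_000 * X[:, 1]
         + 5_000_000 * np.exp(-X[:, 2] / 5)
         - 10_000 * X[:, 3] + rng.normal(0, 100_000, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, price, test_size=0.25, random_state=0)
scores = {
    "linear_regression": r2_score(
        y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te)),
    "random_forest": r2_score(
        y_te, RandomForestRegressor(random_state=0).fit(X_tr, y_tr).predict(X_te)),
}
```

The same held-out R² comparison (alongside error metrics like MAE) is the kind of evidence on which the thesis's choice of random forest rests.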
