  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Decomposition and Stability of Multiparameter Persistence Modules

Cheng Xin (16750956) 04 August 2023 (has links)
<p>The only datasets used in my thesis work are from TUDatasets, <a href="https://chrsmrrs.github.io/datasets/">TUDataset | TUD Benchmark datasets (chrsmrrs.github.io)</a>, a collection of public benchmark datasets for graph classification and regression.</p>
32

INTELLIGENT SOLID WASTE CLASSIFICATION SYSTEM USING DEEP LEARNING

Michel K Mudemfu (13558270) 31 July 2023 (has links)
<p>The proper classification and disposal of waste are crucial in reducing environmental impacts and promoting sustainability. Several solid waste classification systems have been developed over the years, ranging from manual sorting to mechanical and automated sorting. Manual sorting is the oldest and most commonly used method, but it is time-consuming and labor-intensive. Mechanical sorting is more efficient and cost-effective, but it is not always accurate and requires constant maintenance. Automated sorting systems use different types of sensors and algorithms to classify waste, making them more accurate and efficient than manual and mechanical sorting systems. In this thesis, we propose the development of an intelligent solid waste detection, classification, and tracking system using deep learning techniques. To address the limited samples in the TrashNetV2 dataset and enhance model performance, a data augmentation process was implemented. This process aimed to prevent overfitting and mitigate data scarcity while improving the model's robustness. Various augmentation techniques were employed, including random rotation within a range of -20° to 20° to account for different orientations of the recycled materials. A random blur effect of up to 1.5 pixels was used to simulate slight variations in image quality that can arise during image acquisition. Horizontal and vertical flipping of images was applied randomly to accommodate potential variations in the appearance of recycled materials based on their orientation within the image. Additionally, the images were resized to 416 by 416 pixels, maintaining a consistent image size across the dataset. Further variability was introduced through random cropping, with a minimum zoom level of 0% and a maximum zoom level of 25%. 
Lastly, hue variations within a range of -20° to 20° were randomly introduced to replicate lighting variations that may occur during image acquisition. These augmentation techniques collectively aimed to improve the dataset's diversity and the model's performance. In this study, the YOLOv8, EfficientNet-B0, and VGG16 architectures were evaluated, with both stochastic gradient descent (SGD) and Adam used as optimizers; SGD provided better test accuracy than Adam. </p> <p>Among the three models, YOLOv8 showed the best performance, with the highest mean average precision (mAP) of 96.5% and ROC values ranging from 92.70% (metal) to 98.40% (cardboard). The YOLOv8 model therefore outperforms both VGG16 and EfficientNet in terms of ROC values and mAP. The findings demonstrate that our classifier-tracker system, built from YOLOv8 and supervision algorithms, surpasses conventional deep learning methods in precision, resilience, and generalization ability. Our contribution to waste management is the development and implementation of an intelligent solid waste detection, classification, and tracking system using computer vision and deep learning techniques. By utilizing computer vision and deep learning algorithms, our system can accurately detect, classify, and localize various types of solid waste on a moving conveyor, including cardboard, glass, metal, paper, and plastic. This can significantly improve the efficiency and accuracy of waste sorting processes.</p> <p>This research provides a promising solution for real-time detection, classification, localization, and tracking of solid waste materials, which can be further integrated into existing waste management systems. Through comprehensive experimentation and analysis, we demonstrate the superiority of our approach over traditional methods, with higher accuracy and faster processing times. 
Our findings provide a compelling case for the implementation of intelligent solid waste sorting.</p>
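The augmentation ranges quoted in the abstract above can be sketched in a few lines. The following is an illustrative NumPy-only version, not the thesis's actual pipeline: the function names and the toy image are invented, and the apply step implements only the flip and resize operations, omitting rotation, blur, crop, and hue shift for brevity.

```python
import numpy as np

def sample_augmentation(rng):
    """Draw one set of augmentation parameters matching the ranges quoted above."""
    return {
        "rotation_deg": rng.uniform(-20, 20),   # random rotation
        "blur_px": rng.uniform(0.0, 1.5),       # random blur radius
        "hflip": rng.random() < 0.5,            # horizontal flip
        "vflip": rng.random() < 0.5,            # vertical flip
        "zoom": rng.uniform(0.0, 0.25),         # random-crop zoom level
        "hue_deg": rng.uniform(-20, 20),        # hue shift
    }

def apply_flips_and_resize(img, params, out_size=416):
    """Apply only the flip and nearest-neighbour resize steps; rotation, blur,
    crop, and hue are omitted here for brevity."""
    if params["hflip"]:
        img = img[:, ::-1]
    if params["vflip"]:
        img = img[::-1, :]
    h, w = img.shape[:2]
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return img[rows][:, cols]

rng = np.random.default_rng(0)
params = sample_augmentation(rng)
img = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)  # toy image
aug = apply_flips_and_resize(img, params)
```

In practice such a pipeline would typically be expressed with an augmentation library, with each transform applied with its own probability per image.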
33

Rewiring Police Officer Training Networks to Reduce Forecasted Use of Force

Ritika Pandey (9147281) 30 August 2023 (has links)
<p>Police use of force has become a topic of significant concern, particularly given the disparate impact on communities of color. Research has shown that officer-involved shootings, misconduct, and excessive use of force complaints exhibit network effects, where officers are at greater risk of being involved in these incidents when they socialize with officers who have a history of use of force and misconduct. Given that use of force and misconduct behavior appear to be transmissible across police networks, we examine whether police networks can be altered to reduce use of force and misconduct events in a limited scope.</p> <p>In this work, we analyze a novel dataset from the Indianapolis Metropolitan Police Department on officer field training, subsequent use of force, and the role of network effects from field training officers. We construct a network survival model for time-to-event analysis of use of force incidents involving new police trainees. The model includes network effects of the diffusion of risk from field training officers (FTOs) to trainees. We then introduce a network rewiring algorithm to maximize the expected time to use of force events upon completion of field training. We study several versions of the algorithm, including constraints that encourage demographic diversity of FTOs. The results show that FTO use of force history is the best predictor of a trainee's time to use of force in the survival model, and that rewiring the network can increase the expected time (in days) to a recruit's first use of force incident by 8%. </p> <p>We then discuss the potential benefits and challenges associated with implementing such an algorithm in practice.</p>
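As a rough illustration of the rewiring idea, the toy sketch below assigns trainees to FTOs so as to maximize the total expected time to a first use-of-force incident. The hazard model, its coefficients, and the exhaustive search are hypothetical stand-ins for the dissertation's network survival model and rewiring algorithm, which this sketch does not reproduce.

```python
import itertools

def expected_days_to_uof(fto_uof_count, base_expected_days=1200.0, risk_per_event=0.15):
    """Toy exponential-survival stand-in: each prior FTO use-of-force event
    multiplies the trainee's hazard by (1 + risk_per_event), shrinking the
    expected time to a first incident. All coefficients are invented."""
    hazard = (1.0 + risk_per_event) ** fto_uof_count / base_expected_days
    return 1.0 / hazard

def best_assignment(trainees, fto_histories):
    """Exhaustively search one-to-one trainee -> FTO assignments for the largest
    total expected time to a first use-of-force incident (a stand-in for the
    network rewiring step; real instances would need a scalable search)."""
    best, best_total = None, -1.0
    for perm in itertools.permutations(range(len(fto_histories)), len(trainees)):
        total = sum(expected_days_to_uof(fto_histories[f]) for f in perm)
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

# Three available FTOs with 5, 0, and 2 prior use-of-force events; two trainees.
assignment, total_days = best_assignment(["t1", "t2"], [5, 0, 2])
```

The search correctly avoids the FTO with the heaviest use-of-force history; the paper's algorithm additionally handles demographic-diversity constraints, which a plain assignment search like this ignores.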
34

PHYSICS-INFORMED NEURAL NETWORK SOLUTION OF POINT KINETICS EQUATIONS FOR PUR-1 DIGITAL TWIN

Konstantinos Prantikos (14196773) 01 December 2022 (has links)
<p>A <em>digital twin</em> (DT), which keeps track of nuclear reactor history to provide real-time predictions, has recently been proposed for nuclear reactor monitoring. A digital twin can be implemented using either a differential equations-based physics model or a data-driven machine learning model. The principal challenge for a physics model-based DT is achieving sufficient model fidelity to represent a complex experimental system, while the main challenges for a data-driven DT are its extensive training requirements and potential lack of predictive ability. </p> <p>In this thesis, we investigate the performance of a hybrid approach based on physics-informed neural networks (PINNs), which encode fundamental physical laws into the loss function of the neural network. In this way, PINNs establish theoretical constraints and biases to supplement measurement data and provide solutions to several limitations of purely data-driven machine learning (ML) models. We develop a PINN model to solve the point kinetics equations (PKEs), which are time-dependent, stiff, nonlinear ordinary differential equations that constitute a nuclear reactor reduced-order model under the approximation of ignoring the spatial dependence of the neutron flux. PKEs portray the kinetic behavior of the system, and this kind of approach is the basis for most analyses of reactor systems, except in cases where flux shapes are known to vary with time. This system describes nuclear parameters such as the neutron density, the delayed neutron precursor density concentrations, and the reactivity. Both the neutron density and the delayed neutron precursor density concentrations are vital parameters for safety and for the transient behavior of the reactor power. 
</p> <p>The PINN model solution of the PKEs is developed to monitor a start-up transient of Purdue University Reactor Number One (PUR-1), using experimental parameters for the reactivity feedback schedule and the neutron source. The modeled facility, PUR-1, is a small pool-type research reactor located in West Lafayette, Indiana. It is an all-digital light water reactor (LWR) submerged in a deep water pool, with a power output of 10 kW. The results demonstrate strong agreement between the PINN solution and a finite difference numerical solution of the PKEs. We investigate PINN performance in both data interpolation and extrapolation. </p> <p>The findings of this thesis indicate that the PINN model achieved its highest performance and lowest errors in data interpolation. For extrapolation, three test cases were considered, with extrapolation intervals of five, 10, and 15 seconds. The extrapolation errors are comparable to those of the interpolation predictions, although extrapolation accuracy decreases with increasing interval length.</p>
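For reference, the one-delayed-group form of the PKEs can be written down and solved by finite differences in a few lines; a PINN would instead minimize the squared residual of the same right-hand side at sampled collocation times. The kinetics parameters below are illustrative textbook-scale values, not PUR-1 data, and the solver is a plain forward-Euler reference, not the thesis's finite difference scheme.

```python
# One-delayed-group point kinetics equations (PKEs):
#   dn/dt = (rho - BETA)/LAMBDA * n + LAM * C
#   dC/dt = BETA/LAMBDA * n - LAM * C
# where n is neutron density, C the precursor concentration, rho the reactivity.
BETA, LAM, LAMBDA = 0.0065, 0.08, 1e-4  # illustrative values, not PUR-1 parameters

def pke_rhs(n, C, rho):
    """Right-hand side of the PKEs; a PINN's physics loss penalizes the squared
    mismatch between the network's time derivatives and this expression."""
    dn = (rho - BETA) / LAMBDA * n + LAM * C
    dC = BETA / LAMBDA * n - LAM * C
    return dn, dC

def solve_pke(rho_of_t, n0=1.0, t_end=5.0, dt=1e-4):
    """Reference forward-Euler solution, starting from precursor equilibrium."""
    n, C = n0, BETA * n0 / (LAMBDA * LAM)  # dC/dt = 0 at t = 0
    steps = int(round(t_end / dt))
    for k in range(steps):
        dn, dC = pke_rhs(n, C, rho_of_t(k * dt))
        n, C = n + dt * dn, C + dt * dC
    return n

n_crit = solve_pke(lambda t: 0.0)        # zero reactivity: power stays flat
n_up = solve_pke(lambda t: 0.1 * BETA)   # small positive reactivity: power rises
```

Starting from precursor equilibrium, zero reactivity leaves the power unchanged, while a small positive reactivity step produces the expected prompt jump followed by a slow rise.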
35

Graph Matching Based on a Few Seeds: Theoretical Algorithms and Graph Neural Network Approaches

Liren Yu (17329693) 03 November 2023 (has links)
<p dir="ltr">Since graphs are natural representations for encoding relational data, graph matching has attracted increasing attention and could potentially impact various domains such as social network de-anonymization and computer vision. Our main interest is designing polynomial-time algorithms for seeded graph matching problems, where a subset of pre-matched vertex-pairs (seeds) is revealed. </p><p dir="ltr">However, the existing work does not fully investigate the pivotal role of seeds and falls short of making full use of them. Notably, the majority of existing hand-crafted algorithms focus only on using "witnesses" in the 1-hop neighborhood. Although some advanced algorithms use multi-hop witnesses, their theoretical analysis applies only to Erdős–Rényi random graphs and requires the seeds to be all correct, assumptions which often do not hold in real applications. Furthermore, a parallel line of research, Graph Neural Network (GNN) approaches, typically employs a semi-supervised approach, which requires a large number of seeds and lacks the capacity to distill knowledge transferable to unseen graphs.</p><p dir="ltr">In my dissertation, I take two approaches to address these limitations. In the first approach, we design hand-crafted algorithms that properly use multi-hop witnesses to match graphs. We first study graph matching using multi-hop neighborhoods when partially correct seeds are provided. Specifically, consider two correlated graphs whose edges are sampled independently from a parent Erdős–Rényi graph $\mathcal{G}(n,p)$. A mapping between the vertices of the two graphs is provided as seeds, of which an unknown fraction is correct. We first analyze a simple algorithm that matches vertices based on the number of common seeds in the $1$-hop neighborhoods, and then propose a new algorithm that uses seeds in the $D$-hop neighborhoods. 
We establish non-asymptotic performance guarantees of perfect matching for both $1$-hop and $2$-hop algorithms, showing that our new $2$-hop algorithm requires substantially fewer correct seeds than the $1$-hop algorithm when graphs are sparse. Moreover, by combining our new performance guarantees for the $1$-hop and $2$-hop algorithms, we attain the best-known results (in terms of the required fraction of correct seeds) across the entire range of graph sparsity and significantly improve the previous results. We then study the role of multi-hop neighborhoods in matching power-law graphs. Assume that two edge-correlated graphs are independently edge-sampled from a common parent graph with a power-law degree distribution. A set of correctly matched vertex-pairs is chosen at random and revealed as initial seeds. Our goal is to use the seeds to recover the remaining latent vertex correspondence between the two graphs. Departing from the existing approaches that focus on the use of high-degree seeds in $1$-hop neighborhoods, we develop an efficient algorithm that exploits the low-degree seeds in suitably-defined $D$-hop neighborhoods. Our result achieves an exponential reduction in the seed size requirement compared to the best previously known results.</p><p dir="ltr">In the second approach, we study GNNs for seeded graph matching. We propose a new supervised approach that can learn from a training set how to match unseen graphs with only a few seeds. Our SeedGNN architecture incorporates several novel designs, inspired by our theoretical studies of seeded graph matching: 1) it can learn to compute and use witness-like information from different hops, in a way that can be generalized to graphs of different sizes; 2) it can use easily-matched node-pairs as new seeds to improve the matching in subsequent layers. 
We evaluate SeedGNN on synthetic and real-world graphs and demonstrate significant performance improvements over both non-learning and learning algorithms in the existing literature. Furthermore, our experiments confirm that the knowledge learned by SeedGNN from training graphs can be generalized to test graphs of different sizes and categories.</p>
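The 1-hop seed-counting idea described above can be sketched as follows. This toy version scores each candidate pair by its number of 1-hop witnesses and matches greedily by score; it omits the thresholds, tie-breaking, and correctness guarantees of the actual algorithms, and the graphs and seed set are invented for the example.

```python
from collections import defaultdict

def one_hop_matching(adj1, adj2, seeds):
    """1-hop seeded matching sketch: score each candidate pair (u, v) by its
    number of "witnesses" -- seed pairs (s, t) with s adjacent to u in G1 and
    t adjacent to v in G2 -- then greedily match highest-scoring pairs."""
    scores = defaultdict(int)
    seeded_targets = set(seeds.values())
    for s, t in seeds.items():
        for u in adj1.get(s, ()):
            for v in adj2.get(t, ()):
                if u not in seeds and v not in seeded_targets:
                    scores[(u, v)] += 1
    matching, used = dict(seeds), set(seeds.values())
    for (u, v), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if u not in matching and v not in used:
            matching[u] = v
            used.add(v)
    return matching

# Two correlated path graphs 1-2-3-4 and a-b-c-d, with seeds 1->a and 3->c.
adj1 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
adj2 = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
m = one_hop_matching(adj1, adj2, {1: "a", 3: "c"})
```

A $D$-hop variant would count witnesses over $D$-hop neighborhoods instead, which, per the results above, tolerates far fewer (and partially incorrect) seeds on sparse graphs.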
36

Machine Learning of Heater Zone Sensors in Liquid Sodium Facility

Maria Pantopoulou (16494174) 06 July 2023 (has links)
<p>Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), are promising nuclear energy options that offer lower levelized electricity costs than existing light water reactors (LWRs). Increasing the economic competitiveness of ARs in the open market involves developing strategies for reducing operation and maintenance (O&M) costs. Digitization of ARs allows a continuous on-line monitoring paradigm to be implemented to achieve early detection of incipient problems, and thus reduce O&M costs. Machine learning (ML) algorithms offer a number of advantages for reactor monitoring through anticipation of key performance variables using data-driven process models. An ML model does not require detailed knowledge of the system, which can be difficult to obtain or unavailable because of commercial privacy restrictions. In addition, any data obtained from sensors or through various ML models need to be securely transmitted under all possible conditions, including cyber-attacks. Quantum information processing offers promising solutions to these threats by establishing secure communications, owing to the unique properties of entanglement and superposition in quantum physics. More specifically, quantum key distribution (QKD) algorithms can be used to generate and transmit keys between the reactor and a remote user. In one popular QKD protocol, BB84, the symmetric keys are paired with an Advanced Encryption Standard (AES) protocol protecting the information. Another challenge in sensor measurements is noise, which can affect the accuracy and reliability of the measured values. The presence of noise in sensor measurements can lead to incorrect interpretations of the data, and it is therefore crucial to develop effective signal processing techniques to improve the quality of measurements. 
</p> <p>In this study, we develop several variations of Recurrent Neural Networks (RNNs) and test their ability to predict future values of thermocouple measurements. Data obtained from a heat-up experiment conducted in a liquid sodium experimental facility are used for training and testing the RNNs. Extrapolation is also explored, using measurements from different sensors to train and test a network. We then examine through computer simulations the potential of secure real-time communication of monitoring information using the BB84 protocol. Finally, signal analysis is performed on the sensor signals with the Discrete Fourier Transform (DFT) to correlate the prediction results with the behavior of the time series in the frequency domain. Using information from the frequency analysis, we apply cutoff filters to the original time series and again test the performance of the networks. Results show that the ML models developed in this work can be used efficiently for forecasting thermocouple measurements, as they provide Root Mean Square Error (RMSE) values lower than the measurement uncertainty of the thermocouples. Extrapolation produces good results, with performance related to the Euclidean distance between the sets of time series. Moreover, the results from using the BB84 protocol to securely transmit the measurements demonstrate the feasibility of secure real-time communication of monitoring information. The application of cutoff filters provided more accurate predictions of the thermocouple measurements than the unfiltered signals.</p> <p>The suite of computational tools developed in this work is shown to be efficient and promises to have a positive impact on improving the performance of ARs.</p>
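The cutoff-filter step can be illustrated with a DFT low-pass filter applied to a synthetic thermocouple-like trace. The sampling rate, cutoff frequency, and noise level below are assumptions made for the sketch, not values from the study.

```python
import numpy as np

def lowpass_filter(signal, dt, cutoff_hz):
    """Zero out DFT components above cutoff_hz and invert the transform --
    the kind of cutoff filtering applied before forecasting."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.1)                        # 10 Hz sampling, 60 s record
clean = 300 + 5 * np.sin(2 * np.pi * 0.05 * t)   # slow thermal trend (deg C)
noisy = clean + rng.normal(0.0, 1.0, t.size)     # additive sensor noise
filtered = lowpass_filter(noisy, dt=0.1, cutoff_hz=0.2)

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered - clean) ** 2))
```

Because the thermal trend lives well below the cutoff while the noise is spread across all frequencies, the filtered trace tracks the clean signal with a markedly lower RMSE, which is consistent with the improved predictions reported for filtered signals.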
37

Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring

David Matthew LeClerc (13048125) 29 July 2022 (has links)
<p>This paper explored the application of three machine learning models to predictive motor maintenance: Logistic Regression, Sequential Minimal Optimization (SMO), and Naïve Bayes. A comparative analysis of these models illustrated that while each achieved an accuracy greater than 95% in this study, the Logistic Regression model exhibited the most reliable operation.</p>
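As a minimal illustration of the logistic-regression baseline in such a comparison, the sketch below trains a from-scratch classifier on a toy two-cluster "healthy vs. faulty" dataset. The features, hyperparameters, and data are invented for the example and are unrelated to the paper's motor data.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal batch-gradient-descent logistic regression, standing in for the
    logistic model compared in the study (hyperparameters are invented)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad = p - y                              # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(2)
# Toy "healthy vs faulty motor" features: two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-1.0, 0.4, (100, 2)), rng.normal(1.0, 0.4, (100, 2))])
y = np.repeat([0.0, 1.0], 100)
w, b = train_logistic(X, y)
acc = float(np.mean(((X @ w + b) > 0) == (y == 1)))
```

On this easily separable toy data the model exceeds the 95% accuracy threshold mentioned above; the study's comparison would additionally cover SMO and Naïve Bayes on real motor-condition features.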
38

TEMPORAL EVENT MODELING OF SOCIAL HARM WITH HIGH DIMENSIONAL AND LATENT COVARIATES

Xueying Liu (13118850) 09 September 2022 (has links)
<p>The counting process is fundamental to many real-world problems involving event data. The Poisson process, used as the background intensity of the Hawkes process, is the most commonly used point process. The Hawkes process, a self-exciting point process, fits temporal event data, spatio-temporal event data, and event data with covariates. We fit a Hawkes process to heterogeneous drug overdose data via a novel semi-parametric approach. The counting process is also related to survival data, since both study the occurrences of events over time. We fit a Cox model to temporal event data with a large corpus that is processed into high-dimensional covariates, and study the significant features that influence the intensity of events. </p>
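The self-exciting structure mentioned above is commonly written, for an exponential kernel, as λ(t) = μ + Σ_{t_i&lt;t} αβ e^{−β(t−t_i)}, where μ is the Poisson background rate and each past event adds a decaying bump to the intensity. A minimal sketch (parameter values illustrative, not fitted to the overdose data):

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.3, beta=1.0):
    """Conditional intensity of a Hawkes process with exponential kernel:
        lambda(t) = mu + sum over past events t_i of alpha*beta*exp(-beta*(t - t_i))
    mu is the Poisson background rate; alpha controls how much each event
    excites future events, beta how quickly the excitation decays."""
    return mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

events = [1.0, 1.5, 4.0]
lam_before = hawkes_intensity(0.5, events)   # no past events yet: background only
lam_after = hawkes_intensity(1.6, events)    # shortly after two events: elevated
```

In the semi-parametric setting described above, the constant background μ would itself be replaced by an estimated nonparametric function of time or covariates.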
39

Towards Building a High-Performance Intelligent Radio Network through Deep Learning: Addressing Data Privacy, Adversarial Robustness, Network Structure, and Latency Requirements.

Abu Shafin Moham Mahdee Jameel (18424200) 26 April 2024 (has links)
<p dir="ltr">With the increasing availability of inexpensive computing power in wireless radio network nodes, machine learning based models are being deployed in operations that traditionally relied on rule-based or statistical methods. Contemporary high bandwidth networks enable easy availability of significant amounts of training data in a comparatively short time, aiding in the development of better deep learning models. Specialized deep learning models developed for wireless networks have been shown to consistently outperform traditional methods in a variety of wireless network applications.</p><p><br></p><p dir="ltr">We aim to address some of the unique challenges inherent in the wireless radio communication domain. Firstly, as data is transmitted over the air, data privacy and adversarial attacks pose heightened risks. Secondly, due to the volume of data and the time-sensitive nature of the processing that is required, the speed of the machine learning model becomes a significant factor, often necessitating operation within a latency constraint. Thirdly, the impact of diverse and time-varying wireless environments means that any machine learning model also needs to be generalizable. The increasing computing power present in wireless nodes provides an opportunity to offload some of the deep learning to the edge, which also impacts data privacy.</p><p><br></p><p dir="ltr">Towards this goal, we work on deep learning methods that operate along different aspects of a wireless network—on network packets, error prediction, modulation classification, and channel estimation—and are able to operate within the latency constraint, while simultaneously providing better privacy and security. After proposing solutions that work in a traditional centralized learning environment, we explore edge learning paradigms where the learning happens in distributed nodes.</p>
40

LEARNING OBJECTIVE FUNCTIONS FOR AUTONOMOUS SYSTEMS

Zihao Liang (18966976) 03 July 2024 (has links)
<p dir="ltr">In recent years, advancements in robotics and computing power have enabled robots to master complex tasks. Nevertheless, merely executing tasks is not sufficient for robots. To achieve higher robot autonomy, learning the objective function is crucial. Autonomous systems can effectively eliminate the need for explicit programming by autonomously learning the control objective and deriving their control policy through observation of task demonstrations. Hence, there is a need to develop a method for robots to learn the desired objective functions. In this thesis, we address several challenges in objective learning for autonomous systems, enhancing the applicability of our method in real-world scenarios. The ultimate objective of the thesis is to create a universal objective learning approach capable of addressing a range of existing challenges in the field while emphasizing data efficiency and robustness. Building upon this intuition, we present a framework for autonomous systems that addresses a variety of objective learning tasks in real time, even in the presence of noisy data. In addition to objective learning, this framework is capable of handling various other learning and control tasks.</p><p dir="ltr">The first part of this thesis concentrates on objective learning methods, specifically inverse optimal control (IOC). Within this domain, we make three significant contributions aimed at addressing three existing challenges in IOC: 1) learning from minimal data, 2) learning without prior knowledge of system dynamics, and 3) learning with system outputs. </p><p dir="ltr">The second part of this thesis develops a unified IOC framework to address all of the challenges mentioned above. It introduces a new paradigm for autonomous systems, referred to as Online Control-Informed Learning, which tackles a variety of learning and control tasks online with data efficiency and robustness to noisy data. 
Integrating optimal control theory, online state estimation techniques, and machine learning methods, our proposed paradigm offers an online learning framework capable of tackling a diverse array of learning and control tasks. These include online imitation learning, online system identification, and policy tuning on-the-fly, all with efficient use of data and computation resources while ensuring robust performance.</p>
