11

PROBABILISTIC ENSEMBLE MACHINE LEARNING APPROACHES FOR UNSTRUCTURED TEXTUAL DATA CLASSIFICATION

Srushti Sandeep Vichare (17277901) 26 April 2024 (has links)
<p dir="ltr">The volume of big data has surged, notably in unstructured textual data, comprising emails, social media, and more. Currently, unstructured data represents over 80% of global data, the growth is propelled by digitalization. Unstructured text data analysis is crucial for various applications like social media sentiment analysis, customer feedback interpretation, and medical records classification. The complexity is due to the variability in language use, context sensitivity, and the nuanced meanings that are expressed in natural language. Traditional machine learning approaches, while effective in handling structured data, frequently fall short when applied to unstructured text data due to the complexities. Extracting value from this data requires advanced analytics and machine learning. Recognizing the challenges, we developed innovative ensemble approaches that combine the strengths of multiple conventional machine learning classifiers through a probabilistic approach. Response to the challenges , we developed two novel models: the Consensus-Based Integration Model (CBIM) and the Unified Predictive Averaging Model (UPAM).The CBIM and UPAM ensemble models were applied to Twitter (40,000 data samples) and the National Electronic Injury Surveillance System (NEISS) datasets (323,344 data samples) addressing various challenges in unstructured text analysis. The NEISS dataset achieved an unprecedented accuracy of 99.50%, demonstrating the effectiveness of ensemble models in extracting relevant features and making accurate predictions. The Twitter dataset, utilized for sentiment analysis, demonstrated a significant boost in accuracy over conventional approaches, achieving a maximum of 65.83%. The results highlighted the limitations of conventional machine learning approaches when dealing with complex, unstructured text data and the potential of ensemble models. The models exhibited high accuracy across various datasets and tasks, showcasing their versatility and effectiveness in obtaining valuable insights from unstructured text data. The results obtained extend the boundaries of text analysis and improve the field of natural language processing.</p>
12

TOWARDS EFFICIENT AND ROBUST DEEP LEARNING: HANDLING DATA NON-IDEALITY AND LEVERAGING IN-MEMORY COMPUTING

Sangamesh D Kodge (19958580) 05 November 2024 (has links)
<p dir="ltr">Deep learning has achieved remarkable success across various domains, largely relyingon assumptions of ideal data conditions—such as balanced distributions, accurate labeling,and sufficient computational resources—that rarely hold in real-world applications. Thisthesis addresses the significant challenges posed by data non-idealities, including privacyconcerns, label noise, non-IID (Independent and Identically Distributed) data, and adversarial threats, which can compromise model performance and security. Additionally, weexplore the computational limitations inherent in traditional architectures by introducingin-memory computing techniques to mitigate the memory bottleneck in deep neural networkimplementations.We propose five novel contributions to tackle these challenges and enhance the efficiencyand robustness of deep learning models. First, we introduce a gradient-free machine unlearning algorithm to ensure data privacy by effectively forgetting specific classes withoutretraining. Second, we propose a corrective machine unlearning technique, SAP, that improves robustness against label noise using Scaled Activation Projections. Third, we presentthe Neighborhood Gradient Mean (NGM) method, a decentralized learning approach thatoptimizes performance on non-IID data with minimal computational overhead. Fourth, wedevelop TREND, an ensemble design strategy that leverages transferability metrics to enhance adversarial robustness. Finally, we explore an in-memory computing solution, IMAC,that enables energy-efficient and low-latency multiplication and accumulation operationsdirectly within 6T SRAM arrays.These contributions collectively advance the state-of-the-art in handling data non-idealitiesand computational efficiency in deep learning, providing robust, scalable, and privacypreserving solutions suitable for real-world deployment across diverse environments.</p>
13

LIFT AND SHIFT OF MODEL CODE USING MACHINE LEARNING MICROSERVICES WITH GENERATIVE AI MAPPING LAYER IN ENTERPRISE SAAS APPLICATIONS

Venkata C Duvvuri (20213724) 20 November 2024 (has links)
<p dir="ltr">In traditional Software as a Service (SaaS) enterprise applications, there is a need for easy-to-do machine learning (ML) frameworks. Additionally, SaaS applications are closely related when they form an application suite, which brings forth the need for an ML framework that can facilitate the “lift and shift” of ML model code in similar needs in multiple enterprise applications in a suite. To add to this, some SaaS applications are still using legacy infrastructure (on-premise) mandating the need for an ML framework that is backward compatible with coexisting platforms, both cloud and legacy on-premise infrastructure. This study first demonstrated that in SaaS applications, microservices were important ingredients to deploying machine learning (ML) models successfully. In general, microservices can result in efficiencies in software service design, development, and delivery. As they become ubiquitous in the redesign of monolithic software, with the addition of machine learning, the traditional SaaS applications are also becoming increasingly intelligent. Next, the dissertation recommends a portable ML microservice framework Minerva (also known as contAIn—second generation), a Micro-services-based container framework for Applied Machine learning as an efficient way to modularize and deploy intelligent microservices in both traditional “legacy” SaaS application suite and cloud, especially in the enterprise domain. The study also identified and discussed the needs, challenges, and architecture to incorporate ML microservices in such applications. Secondly, the study further identifies that there is an impetus to innovate quickly for machine learning features in enterprise SaaS applications. Minerva’s design for optimal integration with legacy and cloud applications using microservices architecture leveraging lightweight infrastructure accelerates deploying ML models in such applications. The study highlights the real-world implementation of Minerva, doubling innovation speed with the human resources. It evaluates ML model code reusability across applications, resulting in 1.15 to 2X faster adoption compared to previous methods in a marketing application suite. Minerva’s top-tier security encompasses several advanced features designed to protect sensitive data in SaaS marketing applications. It includes end-to-end data encryption, ensuring all data remains secure both in transit and at rest using robust cryptographic algorithms. While a layered design accelerated innovation through porting existing models to related business suites, generative AI methods, while promising, hadn't yielded significant gains with smaller models yet porting over already no code optimized model code.</p>
14

NONLINEAR DIFFUSIONS ON GRAPHS FOR CLUSTERING, SEMI-SUPERVISED LEARNING AND ANALYZING PREDICTIONS

Meng Liu (14075697) 09 November 2022 (has links)
Graph diffusion is the process of spreading information from one or a few nodes to the rest of the graph through edges. The resulting distribution of the information often reflects latent structure of the graph, where more densely connected nodes receive more signal. This makes graph diffusions a powerful tool for local clustering, the problem of finding a cluster or community of nodes around a given set of seeds. Most existing literature on graph diffusions for local graph clustering concerns linear diffusions, whose dynamics can be fully interpreted through linear systems; they are also referred to as eigenvector, spectral, or random-walk based methods. While efficient, they often have difficulty capturing the correct boundary of a target label or target cluster. In contrast, maxflow-mincut based methods, which can be thought of as 1-norm nonlinear variants of the linear diffusions, seek to "improve" or "refine" a given cluster and can often capture the boundary correctly. However, there is a lack of literature adopting them for problems such as community detection, local graph clustering, and semi-supervised learning, due to the complexity of their formulation. We addressed these issues by performing extensive numerical experiments to demonstrate the performance of flow-based methods on graphs from various sources. We also developed an efficient LocalGraphClustering Python package that allows others to easily use these methods on their own problems. While studying these flow-based methods, we found that they cannot grow from a small seed set. Although there are hybrid procedures that incorporate ideas from both linear diffusions and flow-based methods, they have many hard-to-set parameters. To tackle these issues, we propose a simple generalization of the objective function behind linear diffusions and flow-based methods, which we call the generalized local graph min-cut problem. We further show that by introducing a p-norm into this cut problem, we can develop a nonlinear diffusion procedure that finds local clusters from a small seed set while capturing the correct boundary. Our method can be thought of as a nonlinear generalization of the Andersen-Chung-Lang push procedure for efficiently approximating a personalized PageRank vector, and it is a strongly local algorithm, one whose runtime depends on the size of the output rather than the size of the graph. We also show that the p-norm cut functions improve on the standard Cheeger inequalities for linear diffusion methods. We further extend our generalized local graph min-cut problem and the corresponding diffusion solver to hypergraph-based machine learning problems. Although many methods for local graph clustering exist, there are relatively few for localized clustering in hypergraphs, and those that exist often lack the flexibility to model a general class of hypergraph cut functions or cannot scale to large problems. Our new hypergraph diffusion method, in contrast, can compute with a wide variety of cardinality-based hypergraph cut functions while maintaining the strongly local property. We also show that the clusters found by solving the new objective function satisfy a Cheeger-like quality guarantee.

Beyond clustering, recent work on graph-based learning often focuses on node embeddings and graph neural networks. Although these GNN-based methods can beat traditional ones, especially when node-attribute data is available, they are challenging to understand because they are highly over-parameterized. To address this, we propose a novel framework that combines topological data analysis and diffusion to transform the complex prediction space into human-understandable pictures. The method can also be applied to datasets that are not in graph format, scales up to large datasets across different domains, and enables us to find many useful insights about the data and the model.
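For reference, the linear baseline that the thesis generalizes, the Andersen-Chung-Lang (ACL) push procedure for approximating a personalized PageRank vector, can be sketched in a few lines. Parameter values and the small example graph below are illustrative only.

```python
# Minimal sketch of the Andersen-Chung-Lang (ACL) push procedure for
# approximating a personalized PageRank vector around a seed node.
from collections import deque

def acl_push(adj, seed, alpha=0.15, eps=1e-4):
    """adj: dict mapping node -> list of neighbours (undirected graph)."""
    p = {}                      # approximate PageRank mass
    r = {seed: 1.0}             # residual mass, all on the seed initially
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg_u = len(adj[u])
        if r.get(u, 0.0) < eps * deg_u:
            continue
        ru = r[u]
        # Push step: keep an alpha fraction at u, split the rest between a
        # lazy self-residual and the neighbours (standard lazy-walk push).
        p[u] = p.get(u, 0.0) + alpha * ru
        r[u] = 0.5 * (1 - alpha) * ru
        share = 0.5 * (1 - alpha) * ru / deg_u
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
        if r[u] >= eps * deg_u:
            queue.append(u)
    return p  # strongly local: only nodes near the seed are ever touched

# Example: two loosely connected triangles; the seed's triangle gets most mass.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(acl_push(graph, seed=0))
```

The strongly local character is visible in the code: work is only done on nodes whose residual exceeds a degree-scaled threshold, so the runtime scales with the output cluster rather than the whole graph.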
15

DEVELOPING UNIVERSAL AI/ML BENCHMARKS FOR NUCLEAR APPLICATIONS

William Stephen Richards (16388622) 31 July 2023 (has links)
Recent developments in Artificial Intelligence (AI) and Machine Learning (ML) have revolutionized not only engineering but also the way humanity foresees the future with machines. From self-driving cars to large language models and ChatGPT, AI and ML will continue to redefine the boundaries of innovation and reshape the way we interact with the world. The anticipated benefits are transformative, enabling enhanced productivity, improved decision-making, and the potential for significant cost savings. These developments in AI/ML and the promise of improved reliability, anomaly detection, efficient operation, and more have unavoidably caught the attention of nuclear engineers. Advancing nuclear predictive models and providing real-time support for operation and maintenance are just a few of the tasks where AI/ML could provide assistance. Microreactors are just one example of future nuclear systems where semi-autonomous operation and fully digital instrumentation and control with AI/ML-based decision support would be required for cost-effective deployment in remote areas.

However, the nuclear engineering community remains skeptical of the direct application of AI/ML at nuclear facilities, mostly due to limited past experience, the potentially high risk of false negatives, and the limited amount of data available to demonstrate widespread applicability with high confidence. To address these concerns and take advantage of recent public interest in AI/ML, publicly available, real-time datasets need to be created. In this thesis, a universal AI/ML dataset is developed that takes advantage of the recent digitization of Purdue University Reactor One (PUR-1), using real-time data directly from PUR-1. The expectation is to follow the paradigm of the AI/ML community, where open datasets (e.g., Kaggle, ImageNet) were the stepping stone toward new algorithms, facilitating collaborative problem-solving and driving breakthroughs through open competitions and knowledge sharing.

PUR-1 can provide real-time research data, down to the second, for over 2000 different parameters, ranging from physical signals such as neutron flux and control rod positions to calculated signals such as the system change rate. The proposed Purdue Reactor Integrated Machine Learning dataset (PRIMaL), as described in this thesis, includes ten hand-picked signals used to create AI/ML benchmarks of varying complexity related directly to the nuclear field, with the goal of both kickstarting a newfound interest in the nuclear field among AI/ML professionals and building confidence in AI/ML among nuclear engineers. To the best of our knowledge, PRIMaL is the first curated AI/ML benchmark based on real reactor data and focused on nuclear applications, aiming to advance safety, efficiency, and innovation in the nuclear industry while promoting the responsible and secure use of AI/ML technologies.

To confirm the validity of the dataset and provide a simple example of how to use it for AI/ML benchmarking, an example problem of classifying shutdown data as gang lowers or SCRAMs was performed using three ML algorithms: support vector machine, random forest, and logistic regression. This binary classification problem was repeated 288 times for each algorithm, varying the balance ratio of SCRAMs to gang lowers, the time before the shutdown, and the time after the shutdown that the algorithms have access to. The sample problem was a success: the algorithms were able to distinguish SCRAMs from gang lowers with reasonable accuracy in all cases. Future work includes gathering more data from PUR-1 for the database, as further testing with differently sized balanced datasets led to unusually high accuracy due to the small sample size.
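A minimal sketch of this benchmark-style experiment, using the same three scikit-learn algorithms named above, might look as follows. The CSV layout, column names, and split parameters are assumptions for illustration, not the actual PRIMaL schema.

```python
# Minimal sketch of the benchmark task described above: binary classification
# of reactor shutdown events (SCRAM vs. gang lower) from windowed signal
# features. Column names and file layout are assumed, not the PRIMaL schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("primal_shutdowns.csv")           # hypothetical export
X = df.drop(columns=["label"])                      # e.g. flux, rod positions
y = df["label"]                                     # 1 = SCRAM, 0 = gang lower

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```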
16

ARTIFICIAL INTELLIGENCE-BASED SOLUTIONS FOR THE DETECTION AND MITIGATION OF JAMMING AND MESSAGE INJECTION CYBERATTACKS AGAINST UNMANNED AERIAL VEHICLES

Joshua Allen Price (15379817) 01 May 2023 (has links)
This thesis explores the use of machine learning (ML) algorithms and software-defined radio (SDR) hardware for the detection of signal jamming and message injection cyberattacks against unmanned aerial vehicle (UAV) wireless communications. In the first work presented in this thesis, a real-time ML solution for classifying four types of jamming attacks is proposed for implementation on a UAV using an onboard Raspberry Pi computer and a HackRF One SDR. Also presented is a multi-output multiclass convolutional neural network (CNN) model implemented to identify the direction from which a jamming sample is received, in addition to detecting and classifying the jamming type. The jamming types studied herein are barrage, single-tone, successive-pulse, and protocol-aware jamming. The findings of this chapter form the basis of a reinforcement learning (RL) approach for UAV flightpath modification as the next stage of this research. The final work included in this thesis presents an ML solution for the binary classification of three different message injection attacks against ADS-B communication systems, namely path modification, velocity drift, and ghost aircraft injection attacks. The collective results of these individual works demonstrate the viability of artificial intelligence (AI) based solutions for cybersecurity applications in UAV communications.
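A multi-output multiclass CNN of the kind described, with one head for jamming type and one for direction of arrival, can be sketched as below. The input representation, layer sizes, and number of direction bins are illustrative assumptions, not the thesis's architecture.

```python
# Minimal sketch of a multi-output CNN: one head classifies jamming type
# (barrage, single-tone, successive-pulse, protocol-aware, or none) and one
# head classifies direction of arrival. Shapes and sizes are illustrative.
import torch
import torch.nn as nn

class JammingCNN(nn.Module):
    def __init__(self, n_types=5, n_directions=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat = 32 * 16 * 16          # for a 64x64 spectrogram-like input
        self.type_head = nn.Linear(feat, n_types)
        self.direction_head = nn.Linear(feat, n_directions)

    def forward(self, x):
        z = self.backbone(x)
        return self.type_head(z), self.direction_head(z)

model = JammingCNN()
spectrogram = torch.randn(8, 1, 64, 64)            # batch of mock RF samples
type_logits, dir_logits = model(spectrogram)
# Joint loss: one cross-entropy term per output head.
loss = nn.CrossEntropyLoss()(type_logits, torch.randint(0, 5, (8,))) + \
       nn.CrossEntropyLoss()(dir_logits, torch.randint(0, 4, (8,)))
print(type_logits.shape, dir_logits.shape, loss.item())
```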
17

CONTINUOUS RELAXATION FOR COMBINATORIAL PROBLEMS - A STUDY OF CONVEX AND INVEX PROGRAMS

Adarsh Barik (15359902) 27 April 2023 (has links)
In this thesis, we study optimization problems that have a combinatorial aspect to them. The search space for such problems grows exponentially with the problem dimension, so exhaustive search becomes intractable and good relaxations are needed to solve combinatorial problems efficiently. Another challenge arises from the high dimensionality of such problems and the limited number of available samples. Our aim is to come up with innovative approaches that solve these problems with polynomial time and sample complexity. We discuss three combinatorial optimization problems and provide continuous relaxations for them. Our continuous relaxations involve both convex and nonconvex (invex) relaxations. Furthermore, we provide efficient first-order algorithms to solve a general class of invex problems with provable convergence rate guarantees. The three combinatorial problems we study in this work are: learning the directed structure of a Bayesian network using blackbox data, fair sparse regression on a biased dataset where the bias depends on a hidden binary attribute, and mixed linear regression. We propose a convex relaxation for the first problem, while the other two are solved using invex relaxation. For the first problem, we introduce a novel notion of low-rank representation of conditional probability tables for a Bayesian network and connect it to the Fourier transform of real-valued set functions to recover the exact structure of the Bayesian network. For the second problem, we propose a novel invex relaxation for the combinatorial version of sparse linear regression with fairness. For the final problem, we again use invex relaxation to learn a mixture of sparse linear regression models. We formally show the correctness of our proposed methods and provide provable theoretical guarantees on computational and sample complexity. We also develop efficient first-order algorithms to solve invex problems and provide convergence rate analysis for them. Furthermore, we discuss possible future research directions and the problems we want to tackle next.
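The relaxation idea can be illustrated with a standard example that is not specific to this thesis: best-subset sparse regression is combinatorial because its support constraint forces a search over exponentially many subsets, while replacing the ℓ0 constraint with an ℓ1 penalty yields a convex program solvable in polynomial time. The convex and invex relaxations developed in the thesis pursue the same goal with problem-specific surrogate objectives.

```latex
% Generic illustration of continuous relaxation (not the thesis's specific
% convex or invex programs): best-subset sparse regression and its standard
% convex surrogate, where the combinatorial support constraint is replaced
% by an l1 penalty.
\begin{align*}
\text{combinatorial: } \quad & \min_{\beta \in \mathbb{R}^d} \;
  \|y - X\beta\|_2^2 \quad \text{s.t.}\ \|\beta\|_0 \le k
  && \text{(search over exponentially many supports)} \\
\text{relaxed (convex): } \quad & \min_{\beta \in \mathbb{R}^d} \;
  \|y - X\beta\|_2^2 + \lambda \|\beta\|_1
  && \text{(solvable in polynomial time)}
\end{align*}
```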
18

DETERMINING MACROSCOPIC TRANSPORT PARAMETERS AND MICROBIOTA RESPONSE USING MACHINE LEARNING TECHNIQUES

Miad Boodaghidizaji (15339991) 27 April 2023 (has links)
Determining macroscopic properties such as diffusivity, concentration, and viscosity is of paramount importance to many engineering applications. Determining macroscopic properties from experimental or numerical data is challenging because of the inverse nature of these problems. Data-analytic techniques, together with recent advances in machine learning and optimization, have enabled tackling problems that were once considered impossible to solve. In this work, we focus on using Bayesian and state-of-the-art machine learning techniques to solve problems that involve calculating macroscopic transport properties and characterizing microbiota response.

i) We developed a Bayesian approach to estimate the diffusion coefficient of rhodamine 6G in breast cancer spheroids. Determining the diffusivity values of drugs in tumors is crucial to understanding drug resistance, particularly in breast cancer tumors. To this end, we invoked Bayesian inference to determine the light attenuation coefficient and diffusion coefficient in breast cancer spheroids for rhodamine 6G (R6G) as a mock drug for the tyrosine kinase inhibitor neratinib. We observed that the diffusion coefficient values do not noticeably vary across a HER2+ breast cancer cell line as a function of transglutaminase 2 levels, even in the presence of fibroblast cells.

ii) We developed a multi-fidelity model to predict the rheological properties of a suspension of fibers using neural networks and Gaussian processes. Determining the rheological properties of fiber suspensions is indispensable to many industrial applications. To this end, multi-fidelity Gaussian processes and neural networks were used to predict the apparent viscosity. Results indicated that, with tuned hyperparameters, both the multi-fidelity Gaussian processes and the neural networks yield predictions with a high level of accuracy, with neural networks demonstrating marginally better performance.

iii) We developed machine learning models to analyze measles, mumps, rubella, and varicella (MMRV) vaccines using Raman and absorption spectra. Monitoring the concentration of viral particles is indispensable to producing vaccines or antiviral medications. To this end, we designed and optimized convolutional neural network and random forest models to map spectroscopic signals to concentration values. Results indicated that when the joint Raman-absorption signals are used for training, prediction accuracies are higher, with the random forest model demonstrating marginally better performance.

iv) We developed four machine learning models, including random forest, support vector machine, artificial neural networks, and convolutional neural networks, to classify diseases using gut microbiota data. We distinguished between Parkinson's disease, Crohn's disease (CD), ulcerative colitis (UC), human immunodeficiency virus (HIV), and healthy control (HC) subjects in the presence and absence of fiber treatments. Our analysis demonstrated that machine learning can distinguish between healthy and non-healthy cases and predict four different types of diseases with very high accuracy.
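As an illustration of item iii, a minimal sketch of mapping joint Raman and absorption spectra to concentration with a random forest regressor is shown below. The array shapes and file names are assumptions, not the thesis's actual pipeline, and the CNN alternative mentioned above is omitted.

```python
# Minimal sketch: concatenate Raman and absorption spectra and regress
# concentration with a random forest. Shapes and file names are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

raman = np.load("raman_spectra.npy")          # shape (n_samples, n_raman_bins)
absorb = np.load("absorption_spectra.npy")    # shape (n_samples, n_abs_bins)
conc = np.load("concentration.npy")           # shape (n_samples,)

# Joint signal: simply concatenate the two spectral feature vectors.
X = np.hstack([raman, absorb])
X_train, X_test, y_train, y_test = train_test_split(X, conc, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out spectra:", r2_score(y_test, rf.predict(X_test)))
```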
19

ARTIFICIAL INTELLIGENCE-BASED GPS SPOOFING DETECTION AND IMPLEMENTATION WITH APPLICATIONS TO UNMANNED AERIAL VEHICLES

Mohammad Nayfeh (15379369) 30 April 2023 (has links)
In this work, machine learning (ML) modeling is proposed for the detection and classification of global positioning system (GPS) spoofing attacks on unmanned aerial vehicles (UAVs). Three testing scenarios are implemented in an outdoor yet controlled setup to investigate static and dynamic attacks. In these scenarios, authentic sets of GPS signal features are collected, followed by sets obtained while the UAV is under spoofing attacks launched with a software-defined radio (SDR) transceiver module. All sets are standardized, analyzed for correlation, and reduced according to feature importance before being used to train, validate, and test different multiclass ML classifiers. Two schemes for the dataset are proposed: location-dependent and location-independent. The location-dependent dataset keeps the location-specific features, namely latitude, longitude, and altitude, while the location-independent dataset excludes them. The resulting performance evaluation of these classifiers shows a detection rate (DR), misdetection rate (MDR), and false alarm rate (FAR) better than 92%, 13%, and 4%, respectively, together with sub-millisecond detection time. Hence, the proposed modeling facilitates accurate real-time GPS spoofing detection and classification for UAV applications.

A three-class ML model is then implemented on a UAV with a Raspberry Pi processor for classifying the two GPS spoofing attacks (static and dynamic) in real time. First, several models are developed and tested using the prepared dataset. Model evaluation is carried out using the DR, F-score, FAR, and MDR, all of which showed acceptable performance. The optimum model is then loaded onto the onboard processor and tested for real-time detection and classification. Location-dependent applications, such as fixed-route public transportation, are expected to benefit from the methodology presented herein, as the longitude, latitude, and altitude features are characterized in the implemented model.
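A minimal sketch of the described pipeline, standardization, importance-based feature reduction, and a multiclass classifier for the location-independent scheme, is shown below. The column names, label encoding, and choice of random forest are assumptions for illustration rather than the models evaluated in the thesis.

```python
# Minimal sketch: standardize GPS signal features, drop the location-specific
# columns (location-independent scheme), reduce features by importance, and
# train a multiclass classifier (authentic vs. static vs. dynamic spoofing).
# Column names and label encoding are assumed for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("gps_features.csv")                                  # hypothetical export
X = df.drop(columns=["label", "latitude", "longitude", "altitude"])   # location-independent
y = df["label"]                                                       # 0=authentic, 1=static, 2=dynamic

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
clf.fit(X_train, y_train)
# Per-class recall plays the role of the detection rate (DR); misdetection
# and false-alarm rates follow from the confusion matrix.
print(classification_report(y_test, clf.predict(X_test)))
```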
20

Assessing Viability of Open-Source Battery Cycling Data for Use in Data-Driven Battery Degradation Models

Ritesh Gautam (17582694) 08 December 2023 (has links)
<p dir="ltr">Lithium-ion batteries are being used increasingly more often to provide power for systems that range all the way from common cell-phones and laptops to advanced electric automotive and aircraft vehicles. However, as is the case for all battery types, lithium-ion batteries are prone to naturally occurring degradation phenomenon that limit their effective use in these systems to a finite amount of time. This degradation is caused by a plethora of variables and conditions including things like environmental conditions, physical stress/strain on the body of the battery cell, and charge/discharge parameters and cycling. Accurately and reliably being able to predict this degradation behavior in battery systems is crucial for any party looking to implement and use battery powered systems. However, due to the complicated non-linear multivariable processes that affect battery degradation, this can be difficult to achieve. Compared to traditional methods of battery degradation prediction and modeling like equivalent circuit models and physics-based electrochemical models, data-driven machine learning tools have been shown to be able to handle predicting and classifying the complex nature of battery degradation without requiring any prior knowledge of the physical systems they are describing.</p><p dir="ltr">One of the most critical steps in developing these data-driven neural network algorithms is data procurement and preprocessing. Without large amounts of high-quality data, no matter how advanced and accurate the architecture is designed, the neural network prediction tool will not be as effective as one trained on high quality, vast quantities of data. This work aims to gather battery degradation data from a wide variety of sources and studies, examine how the data was produced, test the effectiveness of the data in the Interfacial Multiphysics Laboratory’s autoencoder based neural network tool CD-Net, and analyze the results to determine factors that make battery degradation datasets perform better for use in machine learning/deep learning tools. This work also aims to relate this work to other data-driven models by comparing the CD-Net model’s performance with the publicly available BEEP’s (Battery Evaluation and Early Prediction) ElasticNet model. The reported accuracy and prediction models from the CD-Net and ElasticNet tools demonstrate that larger datasets with actively selected training/testing designations and less errors in the data produce much higher quality neural networks that are much more reliable in estimating the state-of-health of lithium-ion battery systems. The results also demonstrate that data-driven models are much less effective when trained using data from multiple different cell chemistries, form factors, and cycling conditions compared to more congruent datasets when attempting to create a generalized prediction model applicable to multiple forms of battery cells and applications.</p>
