521

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter and the sequential importance sampler, do not scale well with the dimension of the system or the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.</p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which together make it scalable in both dimension and sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, so it can be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem using subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly from the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF (the Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. Both extensions inherit the scalability of the LEnKF with respect to dimension and sample size. Numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.</p>
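As a rough illustration of the Langevin-dynamics machinery underlying the LEnKF, the stochastic gradient Langevin update can be sketched as follows. This is a toy sketch on a 1-D Gaussian target; the function names, step size, and target are our own illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One Langevin update: drift along the gradient of the log-posterior
    plus Gaussian noise scaled so the chain samples the target."""
    noise = rng.normal(size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) \
                 + np.sqrt(step_size) * noise

# Toy target: standard 1-D Gaussian, so grad log p(x) = -x
rng = np.random.default_rng(0)
theta = np.array([5.0])
samples = []
for _ in range(5000):
    theta = sgld_step(theta, lambda t: -t, step_size=0.1, rng=rng)
    samples.append(theta[0])
est_mean = float(np.mean(samples[1000:]))
est_std = float(np.std(samples[1000:]))
```

After burn-in, the chain's sample mean and standard deviation should approximate the target's 0 and 1; in the actual LEnKF this noise-injected update is combined with the EnKF's forecast-analysis steps and mini-batched data.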
522

Futuristic Air Compressor System Design and Operation by Using Artificial Intelligence

Bahrami Asl, Babak 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Compressed air systems are widely used throughout industry, and air compressors are among the most energy-costly systems to operate in industrial plants. They are therefore a primary target for electrical energy and load management practices. Load forecasting is the first step in developing energy management systems on both the supply and user sides. A comprehensive literature review revealed a need to study whether a compressed air system's load can be predicted. A system's load profile would be valuable to industry practitioners as well as related software providers in developing better practices and tools for load management and look-ahead scheduling programs. Feedforward neural networks (FFNN) and long short-term memory (LSTM) techniques were used to perform 15-minute-ahead prediction. Three cases with different sizes and control methods were studied, and the results demonstrated that such forecasts are feasible. Two control methods were then developed using the prediction. The first is designed for variable-speed-driven air compressors. The goal was to decrease the compressor's maximum electrical load by using the system's full operational capabilities and the air receiver tank. This was achieved by optimizing system operation and developing a practical control method. The results can be used to decrease the maximum electrical load consumed by the system while ensuring sufficient air for users during peak compressed air demand. This method can also prevent backup or secondary systems from running during peak demand, which can yield further energy and demand savings.
Load management plays a pivotal role, and user-side maximum-load-reduction methods can improve sustainability as well as reduce the cost of developing sustainable energy production sources. The last part of this research concentrates on reducing the energy consumed by load/unload-controlled air compressors. Two novel control methods are introduced: one uses the prediction as input, and the other does not require prediction. Both reduce energy consumption by increasing the off period while delivering the same compressed air output, that is, without sacrificing the compressed air required for production. / 2019-12-05
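The peak-shaving idea behind the first control method, running the compressor below predicted demand and letting the receiver tank buffer the difference, can be sketched as follows. All thresholds, units, and function names here are illustrative assumptions of ours, not values from the thesis.

```python
def compressor_setpoint(predicted_demand_cfm, tank_pressure_psi,
                        min_pressure_psi=90.0, max_pressure_psi=110.0,
                        max_output_cfm=500.0):
    """Peak-shaving sketch: when the receiver tank has stored capacity to
    spare, run the compressor below the predicted demand and let the tank
    cover the difference; rebuild pressure when the tank runs low."""
    if tank_pressure_psi <= min_pressure_psi:
        return max_output_cfm  # protect supply: rebuild tank pressure
    if tank_pressure_psi >= max_pressure_psi:
        # Tank is full: shave the electrical peak by under-producing
        return min(predicted_demand_cfm, max_output_cfm) * 0.8
    return min(predicted_demand_cfm, max_output_cfm)
```

A real controller would of course be driven by the 15-minute-ahead load forecast and plant constraints; the point of the sketch is only that the tank decouples momentary demand from compressor output.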
523

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory / Prediktiv vertikal CPU-autoskalning i Kubernetes baserat på tidsserieprediktion med Holt-Winters exponentiell utjämning och långt korttidsminne

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily reflect the application's run-time requirements; they only help the cloud infrastructure resource manager map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) recurrent neural networks. The two approaches were analyzed, and their performance was compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and the percentage and amount of insufficient CPU for container workloads from the Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the VPA tended to perform poorly on workloads that change periodically. Our results showed that, compared to the VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and LSTM can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate more stable predictions than HW, which allowed for more robust scaling decisions.
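The Holt-Winters recursions behind the first forecasting strategy can be sketched in a few lines. This is a textbook additive-seasonality implementation with one common initialization scheme, not the thesis's code.

```python
def holt_winters_additive(series, season_len, alpha, beta, gamma, horizon):
    """Additive Holt-Winters forecast: smooth a level, a trend, and one
    seasonal offset per position in the season, then extrapolate."""
    # Initialize level, trend, and seasonals from the first two seasons
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len])
             - sum(series[:season_len])) / season_len ** 2
    season = [series[i] - level for i in range(season_len)]
    for t in range(season_len, len(series)):
        last_level = level
        s = season[t % season_len]
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % season_len] = gamma * (series[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(series) + h) % season_len]
            for h in range(horizon)]

# A perfectly periodic CPU-load-like series is reproduced exactly
load = [10.0, 20.0, 30.0, 40.0] * 6
forecast = holt_winters_additive(load, 4, alpha=0.5, beta=0.3, gamma=0.5,
                                 horizon=4)
```

On a noiseless periodic input the smoothing state is invariant, so the forecast repeats the pattern; on real workloads the smoothing constants trade responsiveness against stability, which is where LSTM's more stable predictions gave it the edge in this work.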
524

Battery Capacity Prediction Using Deep Learning : Estimating battery capacity using cycling data and deep learning methods

Rojas Vazquez, Josefin January 2023 (has links)
The growing urgency of climate change has driven growth in electrification technology, where batteries play an essential role in the renewable energy transition by supporting environmentally friendly technologies such as smart grids, energy storage systems, and electric vehicles. Battery cells inevitably degrade with use. Characterizing lithium-ion battery degradation during operation supports the prediction of future degradation and helps minimize the mechanisms that cause power fade and capacity fade. This degree project investigates capacity-based battery degradation prediction using deep learning methods, analyzing degradation and health prediction for lithium-ion cells with non-destructive techniques: electrochemical impedance spectroscopy (EIS) fitted to an equivalent circuit model (ECM), and three different deep learning models trained on multi-channel cycling data. The AI models were designed, developed, and evaluated in MATLAB. The results reveal increased resistance in the EIS measurements as an indicator of ongoing battery aging processes such as loss of active material, solid-electrolyte interphase thickening, and lithium plating. The AI models provide accurate capacity estimates, with the LSTM model showing exceptional performance in the RMSE-based evaluation. These findings highlight the importance of carefully managing battery charging processes and considering the factors that contribute to degradation. Understanding degradation mechanisms enables strategies that mitigate aging and extend battery lifespan, ultimately leading to improved performance.
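The RMSE criterion used to compare the capacity-estimation models can be sketched as follows. The capacity values below are made-up illustrations, not measurements from the project.

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between estimated and measured capacity."""
    assert len(predicted) == len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))

# Hypothetical capacities (Ah) at a few check-up cycles
measured  = [2.00, 1.96, 1.93, 1.90]
estimated = [2.01, 1.95, 1.94, 1.88]
error = rmse(estimated, measured)
```

A lower RMSE means the model's capacity trajectory tracks the measured fade more closely, which is the basis on which the LSTM model stood out here.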
525

LEVERAGING MACHINE LEARNING FOR ENHANCED SATELLITE TRACKING TO BOLSTER SPACE DOMAIN AWARENESS

Charles William Grey (16413678) 23 June 2023 (has links)
<p>Our modern society is more dependent on its assets in space than ever before. For example, the Global Positioning System (GPS) many rely on for navigation uses data from a 24-satellite constellation, and our current infrastructure for gas pumps, cell phones, ATMs, traffic lights, weather data, etc. all depends on satellite data from various constellations. As a result, it is increasingly necessary to accurately track and predict the space domain. In this thesis, after discussing how space object tracking and position prediction are currently done, I propose a machine learning-based approach to improving space object position prediction over the standard SGP4 method, whose prediction accuracy is limited to a horizon of about 24 hours. Using this approach, we show that meaningful improvements over the standard SGP4 model can be achieved with a machine learning model built on a type of recurrent neural network called a long short-term memory (LSTM) model. I also provide distance predictions for four different space objects over time frames of 15 and 30 days. Future work in this area is likely to include extending and validating this approach on additional satellites to construct a more general model, testing a wider range of models to determine limits on accuracy across a broad range of time horizons, and proposing similar methods less dependent on antiquated data formats like the TLE.</p>
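A minimal sketch of the residual-learning idea, fitting a model to the baseline propagator's position error as a function of prediction horizon, might look like this. Synthetic data and a simple least-squares fit stand in for the thesis's LSTM; none of the numbers come from the actual experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": position error of a baseline propagator that grows
# roughly linearly with prediction horizon (illustrative, not real SGP4 data)
horizon_days = rng.uniform(0.0, 30.0, size=200)
true_error_km = 3.0 * horizon_days + rng.normal(0.0, 1.0, size=200)

# Learn the error model by least squares, then predict the error at 15 days
slope, intercept = np.polyfit(horizon_days, true_error_km, deg=1)
predicted_error_km = slope * 15.0 + intercept
```

A correction model of this kind can be subtracted from the baseline's prediction; the thesis replaces the linear fit with an LSTM so the correction can capture nonlinear, history-dependent error growth.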
526

Prediction of Protein-Protein Interactions Using Deep Learning Techniques

Soleymani, Farzan 24 April 2023 (has links)
Proteins are the primary actors in living organisms and mainly perform their functions by interacting with other proteins. Protein-protein interactions (PPIs) underpin various biological activities such as metabolic cycles, signal transduction, and immune response. PPI identification has been addressed by various experimental methods, such as yeast two-hybrid screening, mass spectrometry, and protein microarrays, to mention a few. However, due to the sheer number of proteins, experimental methods for finding interacting and non-interacting protein pairs are time-consuming and costly. Therefore, a sequence-based framework called ProtInteract is developed to predict protein-protein interactions. ProtInteract comprises two components: first, a novel autoencoder architecture that encodes each protein's primary structure into a lower-dimensional vector while preserving its underlying sequential pattern by extracting uncorrelated attributes and more expressive descriptors. This leads to faster training of the second component, a deep convolutional neural network (CNN) that receives the encoded proteins and predicts their interaction. The prediction task is formulated in three different scenarios; in each, the deep CNN predicts the class of a given encoded protein pair, where each class corresponds to a range of confidence scores for whether the predicted interaction occurs. The proposed framework features significantly low computational complexity and a relatively fast response. The present study makes two significant contributions to the field of PPI prediction. First, it addresses the computational challenges posed by the high dimensionality of protein datasets through dimensionality reduction techniques that extract highly informative sequence attributes.
Second, the proposed framework, ProtInteract, uses this information to identify the interaction characteristics of a protein based on its amino acid configuration. ProtInteract encodes the protein's primary structure into a lower-dimensional vector space, thereby reducing the computational complexity of PPI prediction. Our results provide evidence of the framework's accuracy and efficiency in predicting protein-protein interactions.
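A common first step for any such sequence-based framework is encoding each primary structure numerically. A minimal one-hot encoding sketch follows; this is our illustration of the general idea, not the ProtInteract encoder, which uses a learned autoencoder to compress sequences further.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence, max_len):
    """Encode a primary structure as a (max_len, 20) one-hot matrix,
    truncated or zero-padded to a fixed length for batching."""
    mat = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, aa in enumerate(sequence[:max_len]):
        mat[i, AA_INDEX[aa]] = 1.0
    return mat

encoded = one_hot_encode("MKT", max_len=5)  # a 3-residue toy sequence
```

Representations like this are what an autoencoder then maps to a dense lower-dimensional vector, so the downstream CNN sees compact, fixed-size inputs for each protein pair.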
527

The impact of parsing methods on recurrent neural networks applied to event-based vehicular signal data / Påverkan av parsningsmetoder på återkommande neuronnät applicerade på händelsebaserad signaldata från fordon

Max, Lindblad January 2018 (has links)
This thesis examines two different approaches to parsing event-based vehicular signal data to produce input to a neural network prediction model: event parsing, where the data is kept unevenly spaced over the temporal domain, and slice parsing, where the data is instead made evenly spaced over the temporal domain. The dataset used as a basis for these experiments consists of a number of vehicular signal logs recorded at Scania AB. The parsing methods were compared by first training long short-term memory (LSTM) recurrent neural networks (RNNs) on each of the parsed datasets and then measuring the output error and resource costs of each model after validating them on a number of shared validation sets. The results from these tests clearly show that slice parsing compares favourably to event parsing.
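The slice-parsing idea, resampling unevenly spaced events onto a fixed temporal grid, can be sketched as follows. This is a zero-order-hold resampler of our own; the thesis's actual parser may differ in details such as interpolation and window handling.

```python
def slice_parse(events, dt, t_end):
    """Convert unevenly spaced (time, value) events into an evenly spaced
    series by holding the last observed value within each slice."""
    events = sorted(events)
    out, value, i, t = [], None, 0, 0.0
    while t < t_end:
        # Consume every event that has occurred by the current slice time
        while i < len(events) and events[i][0] <= t:
            value = events[i][1]
            i += 1
        out.append(value)
        t += dt
    return out

# Three events over five seconds, sampled once per second
events = [(0.0, 1), (0.7, 2), (3.1, 5)]
series = slice_parse(events, dt=1.0, t_end=5.0)
```

Event parsing would instead feed the raw `(time, value)` pairs to the model, leaving the network to cope with the irregular spacing; the thesis's results favour the evenly spaced representation above.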
528

Beyond standard assumptions on neural excitability / when channels cooperate or capacitance varies

Pfeiffer, Paul Elias 24 August 2023 (has links)
Electrical signaling in neurons is shaped by their specialized excitable cell membranes. Commonly, it is assumed that the ion channels embedded in the membrane gate independently and that the electrical capacitance of neurons is constant. However, not all excitable membranes appear to adhere to these assumptions. On the contrary, ion channels are observed to gate cooperatively in several circumstances, and the notion of one fixed value for the specific membrane capacitance (per unit area) across neuronal membranes has also been challenged recently. How these deviations from the original form of conductance-based neuron models affect their electrical properties has not been extensively explored and is the focus of this cumulative thesis. In the first project, strongly cooperative voltage-gated ion channels are proposed to provide a membrane potential-based mechanism for cellular short-term memory. Based on a mathematical model of cooperative gating, it is shown that coupled channels assembled into small clusters act as an ensemble of bistable conductances. The correspondingly large memory capacity of such an ensemble yields an alternative explanation for graded forms of cell-autonomous persistent firing, an observed firing mode implicated in working memory. In the second project, a novel dynamic clamp protocol, the capacitance clamp, is developed to artificially modify capacitance in biological neurons. Experimental means to systematically investigate capacitance, a basic parameter shared by all excitable cells, had previously been missing. The technique, thoroughly tested in simulations and experiments, is used to monitor how capacitance affects temporal integration and the energetic cost of spiking in dentate gyrus granule cells. Combined, the projects identify computationally relevant consequences of these often neglected facets of neuronal membranes and extend the modeling and experimental techniques needed to study them further.
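The role capacitance plays in membrane dynamics, the property the capacitance clamp manipulates, can be illustrated with a minimal passive-membrane simulation: in C dV/dt = I - gV, the capacitance sets the time constant tau = C/g. Units and parameter values below are illustrative, not from the thesis.

```python
def membrane_voltage(c_m, t_stop, g_leak=10.0, i_inj=1.0, dt=1e-5):
    """Forward-Euler integration of the passive membrane equation
    C dV/dt = I - g*V; the charging time constant is tau = C / g."""
    v = 0.0
    for _ in range(int(t_stop / dt)):
        v += dt * (i_inj - g_leak * v) / c_m
    return v

# Doubling capacitance slows the approach to the steady state I/g = 0.1
v_fast = membrane_voltage(c_m=1.0, t_stop=0.05)
v_slow = membrane_voltage(c_m=2.0, t_stop=0.05)
```

Both traces converge to the same steady-state voltage, but the higher-capacitance membrane gets there more slowly; it is this kind of temporal-integration effect that the capacitance clamp lets one probe in real granule cells.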
529

Machine Learning for Spacecraft Time-Series Anomaly Detection and Plant Phenotyping

Sriram Baireddy (17428602) 01 December 2023 (has links)
<p dir="ltr">Detecting anomalies in spacecraft time-series data is a high priority, especially considering the harshness of the spacecraft operating environment, as these anomalies often function as precursors to system failure. Traditionally, the time-series data channels are monitored manually by domain experts, which is time-consuming, and there are thousands of channels to monitor. Machine learning methods have proven useful for automatic anomaly detection, but a unique model must be trained from scratch for each time-series. This thesis proposes three approaches for reducing training costs. The first is a transfer learning approach that fine-tunes a general pre-trained model, reducing both the training time and the number of unique models required for a given spacecraft. The second and third approaches use online learning to reduce the amount of training data and time needed to identify anomalies: the second leverages an ensemble of extreme learning machines, while the third uses deep learning models. All three approaches are shown to achieve reasonable anomaly detection performance with reduced training costs.</p><p dir="ltr">Measuring the phenotypes, or observable traits, of a plant enables plant scientists to understand the interaction between the growing environment and the genetic characteristics of the plant. Plant phenotyping is typically done manually and often involves destructive sampling, making the entire process labor-intensive and difficult to replicate. In this thesis, we use image processing to characterize two different disease progressions. Tar spot disease can be identified visually, as it induces small black circular spots on the leaf surface; we propose using a Mask R-CNN to detect tar spots from RGB images of leaves, enabling rapid non-destructive phenotyping of afflicted plants. The second disease, bacteria-induced wilting, is conventionally measured with a visual assessment that is often subjective. We design several metrics, extracted from RGB images, that can be used with a random forest to generate consistent wilting measurements. Both approaches ensure faster, replicable results, enabling accurate, high-throughput analysis to draw conclusions about effective disease treatments and plant breeds.</p>
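A color-based image metric of the kind described might be sketched as follows. The green-dominance criterion here is our illustrative stand-in, not one of the thesis's actual wilting metrics.

```python
import numpy as np

def green_fraction(rgb_image):
    """A simple wilting-related metric: the fraction of pixels whose green
    channel dominates both red and blue (a proxy for healthy canopy area)."""
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    green_mask = (g > r) & (g > b)
    return float(green_mask.mean())

# Tiny synthetic image: two green-dominant pixels, one red, one blue
img = np.array([[[10, 200, 10], [30, 120, 40]],
                [[220, 40, 30], [20, 30, 200]]], dtype=np.uint8)
frac = green_fraction(img)
```

Several scalar features of this kind, computed per image, are the sort of inputs a random forest can then map to a consistent wilting score.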
530

Predicting stock market trends using time-series classification with dynamic neural networks

Mocanu, Remus 09 1900 (has links)
The objective of this research was to evaluate the efficacy of a classification setting for predicting stock market trends. Traditional forecasting-based methods, which target the immediate next time step, often encounter challenges due to non-stationary data, compromising model accuracy and stability. In contrast, our classification approach predicts broader stock price movements over multiple time steps, aiming to reduce data non-stationarity. Our dataset, derived from various NASDAQ-100 stocks and informed by multiple technical indicators, utilized a Mixture of Experts composed of a soft gating mechanism and a transformer-based architecture. Although the main method of this experiment did not prove as successful as initial results had suggested, the methodology was capable of surpassing all baselines in certain instances within a few epochs, demonstrating the lowest false discovery rate while maintaining an acceptable, non-zero recall rate. Given these results, our approach not only encourages further research in this direction, in which finer tuning of the model can be implemented, but also offers traders a different tool for predicting stock market trends, using a classification setting and a problem defined differently from the norm. It is important to note, however, that our study is based on NASDAQ-100 data, which limits the model's immediate applicability to other stock markets or varying economic conditions. Future research could enhance performance by integrating company fundamentals and conducting sentiment analysis on stock-related news, as our current work considers only technical indicators and stock-specific numerical features.
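The soft gating mechanism of a Mixture of Experts can be sketched in a few lines: a gate network scores each expert from the input features, and the final class probabilities are the gate-weighted mix of the experts' outputs. The weights and expert outputs below are toy values; the thesis's experts are transformer-based, not fixed vectors.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mixture_of_experts(features, expert_outputs, gate_weights):
    """Soft gating: score each expert from the features, then blend the
    experts' class-probability outputs with the softmaxed gate scores."""
    gate = softmax(gate_weights @ features)  # one score per expert
    return gate @ expert_outputs             # expert_outputs: (n_experts, n_classes)

features = np.array([1.0, -0.5])
expert_outputs = np.array([[0.9, 0.1],   # expert 1 favours an up-trend
                           [0.2, 0.8]])  # expert 2 favours a down-trend
gate_weights = np.array([[2.0, 0.0],
                         [0.0, 2.0]])
probs = mixture_of_experts(features, expert_outputs, gate_weights)
```

Because the gate is soft, every expert contributes to every prediction, but the blend leans toward whichever expert the gate scores highest for the given input regime.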
