  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

AN ARTIFICIAL INTELLIGENCE APPROACH FOR RELIABLE AUTONOMOUS NAVIGATION IN GPS-DENIED ENVIRONMENTS WITH APPLICATIONS TO UNMANNED AERIAL VEHICLES

Mustafa MOHAMMAD S Alkhatib Sr (18496281) 03 May 2024 (has links)
<p dir="ltr">This research focuses on developing artificial intelligence tools to detect and mitigate cyber-attacks targeting unmanned aerial vehicles.</p>
102

Analyzing Lower Limb Motion Capture with Smartphone: Possible improvements using machine learning

Brink, Anton January 2024 (has links)
Human motion analysis (HMA) can play a crucial role in sports and healthcare by providing unique insights into movement mechanics in the form of objective measurements and quantitative data. Traditional state-of-the-art marker-based techniques, despite their accuracy, come with financial and logistical barriers and are restricted to laboratory settings. Markerless systems offer much improved affordability and portability, and can potentially be used outside of laboratories. However, these advantages come at a significant cost in accuracy. This thesis addresses the challenge of democratizing HMA by leveraging recent advances in smartphone technology and machine learning.

This thesis evaluates two modalities of performing markerless HMA: a single smartphone using Apple ARKit, and a multi-smartphone setup using OpenCap, and compares both to a state-of-the-art multi-camera marker-based system from Vicon. Additionally, it presents and evaluates two approaches to improving the single-smartphone modality: employing a Gaussian process regression (GPR) model and a long short-term memory (LSTM) neural network to refine the single-smartphone data to align with the marker-based result. Specific movements were recorded simultaneously with all three modalities on 13 subjects to build a dataset. From this, GPR and LSTM models were trained and applied to refine the single-camera modality data. Lower limb joint angles and joint centers were evaluated across the different modalities and analyzed for potential use in real-world applications. The findings are promising: both the GPR and LSTM models improve the accuracy of Apple ARKit, and OpenCap provides accurate and consistent results. It is nonetheless important to acknowledge limitations regarding demographic diversity and how real-world environmental factors may influence application.
This thesis contributes to the effort of narrowing the gap between marker-based HMA methods and more accessible solutions.
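The refinement step described in the abstract above, fitting a Gaussian process regression model to map single-smartphone estimates toward marker-based reference values, can be sketched as follows. The data, kernel settings, and offset function are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_fit_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP conditioned on the training data."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    return K_star @ alpha

# Illustrative data: smartphone angle estimates with a smooth systematic
# deviation from a hypothetical marker-based reference.
phone_angle = np.linspace(0.0, 3.0, 30)
reference = phone_angle + 0.5 * np.sin(phone_angle)   # stand-in "ground truth"
corrected = gpr_fit_predict(phone_angle, reference, phone_angle)
```

At the training inputs, the GPR posterior mean reproduces the reference mapping closely; in the thesis's setting the same idea is applied to joint angles rather than a synthetic curve.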
103

How Certain Are You of Getting a Parking Space? A deep learning approach to parking availability prediction

Nilsson, Mathias, von Corswant, Sophie January 2020 (has links)
Traffic congestion is a severe problem in urban areas, leading to greenhouse gas emissions and air pollution. Drivers generally lack knowledge of the location and availability of free parking spaces in cities. As a result, people drive around searching for parking, and about one-third of traffic congestion in cities is due to drivers searching for an available parking space. In recent years, various solutions for providing parking information in advance have been proposed; the vast majority have been applied in large cities such as Beijing and San Francisco. This thesis, conducted in collaboration with Knowit and Dukaten, predicts parking occupancy in car parks one hour ahead in the relatively small city of Linköping. To make the predictions, this study investigated long short-term memory networks and gradient boosting regression trees, trained on historical parking data. To enhance decision making, the predictive uncertainty was estimated using Monte Carlo dropout for the former and quantile regression for the latter. This study reveals that both models can predict parking occupancy ahead of time, and they are found to excel in different contexts. The inclusion of exogenous features can improve prediction quality: incorporating hour of the day improved the models' performance, while weather features did not contribute much. As for uncertainty, Monte Carlo dropout was shown to be sensitive to parameter tuning in order to obtain good uncertainty estimates.
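The Monte Carlo dropout idea used for uncertainty estimation above can be illustrated with a toy model: dropout is kept active at prediction time, the stochastic forward pass is repeated many times, and the spread of the outputs serves as a crude uncertainty estimate. The weights and dropout rate below are invented for illustration, not taken from the thesis:

```python
import random
import statistics

def forward_with_dropout(x, weights, p_drop=0.2, rng=random):
    """One stochastic forward pass of a toy linear model: each weight is
    dropped with probability p_drop; survivors are scaled by 1/(1-p)
    (inverted dropout), so the expected output matches the full model."""
    total = 0.0
    for w in weights:
        if rng.random() >= p_drop:
            total += w * x / (1.0 - p_drop)
    return total

def mc_dropout_predict(x, weights, n_samples=500, seed=0):
    """Monte Carlo dropout: the mean over stochastic passes is the
    prediction; the standard deviation estimates predictive uncertainty."""
    rng = random.Random(seed)
    outs = [forward_with_dropout(x, weights, rng=rng) for _ in range(n_samples)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, std = mc_dropout_predict(1.0, [0.5, -0.2, 0.8, 0.1])
```

In a real network the same recipe applies per layer; the thesis's observation that the method is sensitive to parameter tuning corresponds here to the dependence of `std` on `p_drop`.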
104

Development of a Software Reliability Prediction Method for Onboard European Train Control System

Longrais, Guillaume Pierre January 2021 (has links)
Software reliability prediction is a complex area, as there are no accurate models to represent reliability throughout the use of software, unlike hardware reliability. In the context of on-board train systems, ensuring good software reliability over time is all the more critical given the current density of rail traffic and the risk of accidents resulting from a software malfunction. This thesis proposes to use soft computing methods and historical failure data to predict the software reliability of on-board train systems. For this purpose, four machine learning models (Multi-Layer Perceptron, Imperialist Competitive Algorithm Multi-Layer Perceptron, Long Short-Term Memory Network, and Convolutional Neural Network) are compared to determine which has the best prediction performance. We also study the impact of having one or more features represented in the dataset used to train the models. The performance of the different models is evaluated using the mean absolute error, mean squared error, root mean squared error, and R squared. The report shows that the Long Short-Term Memory Network is the best-performing model on the data used for this project. It also shows that datasets with a single feature achieve better predictions. However, the small amount of data available for the experiments in this project may have affected the results, which makes further investigation necessary.
105

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or sequential importance sampler, do not scale well with the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.</p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both dimension and sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, so it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF with a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and long short-term memory (LSTM) network learning with dynamic data.</p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF was developed for Gaussian dynamic systems containing no unknown parameters. We propose the stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly from the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. These two extensions inherit the scalability of the LEnKF with respect to dimension and sample size. Numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.</p>
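For readers unfamiliar with the forecast-analysis procedure the LEnKF inherits, a minimal stochastic EnKF analysis step on a toy linear-Gaussian system might look like the sketch below. This is the classic EnKF update, not the Langevinized variant introduced in the dissertation, and the dimensions and covariances are invented:

```python
import numpy as np

def enkf_analysis(ensemble, H, y, obs_cov, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, dim) forecast ensemble
    H:        (obs_dim, dim) linear observation operator
    y:        (obs_dim,) observation
    obs_cov:  (obs_dim, obs_dim) observation-error covariance
    Each member is nudged toward a perturbed copy of the observation."""
    n, dim = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                      # sample forecast covariance
    S = H @ P @ H.T + obs_cov
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=n)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(200, 2))    # 200 members, 2-D state
H = np.array([[1.0, 0.0]])                     # observe first component only
y = np.array([3.0])
posterior = enkf_analysis(prior, H, y, obs_cov=np.array([[0.1]]), rng=rng)
```

With a precise observation of the first state component, the posterior ensemble mean moves most of the way from the prior mean (near 0) toward the observed value 3; the LEnKF replaces this update with a Langevin-dynamics step that restores convergence to the correct filtering distribution.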
106

Futuristic Air Compressor System Design and Operation by Using Artificial Intelligence

Bahrami Asl, Babak 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Compressed air systems are widely used throughout industry. Air compressors are among the most costly systems to operate in industrial plants in terms of energy consumption, and they are therefore a primary target for electrical energy and load management practices. Load forecasting is the first step in developing energy management systems on both the supply and user side. A comprehensive literature review revealed a need to study whether a compressed air system's load can be predicted. The system's load profile would be valuable to industry practitioners as well as related software providers in developing better practices and tools for load management and look-ahead scheduling programs. Feedforward neural networks (FFNN) and long short-term memory (LSTM) techniques were used to perform 15-minute-ahead prediction. Three cases of different sizes and control methods were studied, and the results demonstrated that such forecasting is feasible. In this study, two control methods were developed using the prediction. The first is designed for variable-speed-driven air compressors. The goal was to decrease the maximum electrical load of the air compressor by using the system's full operational capabilities and the air receiver tank. This goal was achieved by optimizing the system operation and developing a practical control method. The results can be used to decrease the maximum electrical load consumed by the system while ensuring sufficient air for users during peak compressed air demand. This method can also prevent backup or secondary systems from running during peak demand, which can yield further energy and demand savings.

Load management plays a pivotal role, and user-side methods for reducing maximum load can improve sustainability as well as reduce the cost of developing sustainable energy production sources. The last part of this research concentrates on reducing the energy consumed by load/unload-controlled air compressors. Two novel control methods are introduced: one uses the prediction as input, and the other does not require prediction. Both reduced energy consumption by increasing the off period with the same compressed air output, in other words, without sacrificing the compressed air required for production. / 2019-12-05
107

Outlier detection with ensembled LSTM auto-encoders on PCA-transformed financial data

Stark, Love January 2021 (has links)
Financial institutions today generate large amounts of data that can contain information worth investigating to further the institution's economic growth. There is particular interest in analyzing data points that are anomalous relative to the normal day-to-day activity. However, finding these outliers is not an easy task and cannot be done manually due to the massive amounts of data generated daily. Previous work has explored the use of machine learning to find outliers in financial datasets, and previous studies have shown that pre-processing usually accounts for a large share of information loss. This work studies whether a proper balance can be struck in pre-processing: retaining as much information as possible while keeping the data simple enough for the machine learning models. The dataset consisted of foreign exchange transactions supplied by the host company and was pre-processed using principal component analysis (PCA). The main purpose of this work is to test whether an ensemble of long short-term memory recurrent neural networks (LSTM), configured as autoencoders, can be used to detect outliers in the data, and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensemble autoencoders can be more accurate than a single autoencoder, especially when SkipCells are implemented (a configuration that skips over LSTM cells to make the models more varied). A data point is considered an outlier if the LSTM model has trouble recreating it properly, i.e. a pattern that is hard to reconstruct, making it available for further manual investigation. The results show that the ensembled LSTM model was more accurate than a single LSTM model with regard to reconstructing the dataset and, by our definition of an outlier, more accurate in outlier detection. The pre-processing experiments reveal methods for obtaining an optimal number of components, for instance by studying the retained variance and accuracy of the PCA transformation against model performance for a given number of components. One conclusion of the work is that ensembled LSTM networks can prove very powerful, but that alternatives to pre-processing, such as categorical embedding instead of PCA, should be explored.
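The underlying principle in the abstract above, flag the points the model reconstructs poorly, can be demonstrated without an LSTM at all, using PCA reconstruction error as a simple stand-in for the autoencoder. The data below are synthetic, not from the thesis:

```python
import numpy as np

def pca_reconstruction_error(X, n_components):
    """Project each row onto the top principal components, reconstruct it,
    and return the per-row reconstruction error; large errors suggest
    points that do not follow the dominant structure of the data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                       # (dim, n_components)
    recon = Xc @ V @ V.T + mu                     # rank-limited reconstruction
    return np.linalg.norm(X - recon, axis=1)

rng = np.random.default_rng(1)
# Inliers lie near a 1-D line in 3-D space; one injected point breaks it.
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))
X[0] = [5.0, -5.0, 5.0]                           # injected outlier
errors = pca_reconstruction_error(X, n_components=1)
outlier_idx = int(np.argmax(errors))
```

An LSTM autoencoder replaces the linear projection with a learned nonlinear, sequence-aware one, but the detection rule, thresholding the reconstruction error, is the same.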
108

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application's run-time requirements; they only help the cloud infrastructure resource manager map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed and their performance compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and of the percentage and amount of insufficient CPU, for container workloads from the Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the VPA tended to perform poorly on workloads that change periodically. Our results showed that, compared to the VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and LSTM can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate more stable predictions than HW, which allowed for more robust scaling decisions.
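A minimal additive Holt-Winters smoother of the kind used in the HW strategy above can be written in a few lines. The seasonal CPU-like pattern, smoothing constants, and initialisation below are illustrative, not the thesis's configuration:

```python
def holt_winters_additive(series, season_len, alpha=0.4, beta=0.1, gamma=0.3):
    """Additive Holt-Winters smoothing with level, trend, and seasonal
    components; returns the one-step-ahead forecast after consuming the
    whole series. Initialisation is a crude fit to the first season."""
    level = sum(series[:season_len]) / season_len
    trend = 0.0
    seasonal = [x - level for x in series[:season_len]]
    for i in range(season_len, len(series)):
        x = series[i]
        s = seasonal[i % season_len]
        last_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[i % season_len] = gamma * (x - level) + (1 - gamma) * s
    return level + trend + seasonal[len(series) % season_len]

# Illustrative periodic CPU-usage-like signal with period 4.
pattern = [10.0, 50.0, 30.0, 20.0]
history = pattern * 8                    # 8 full cycles of history
forecast = holt_winters_additive(history, season_len=4)
```

On this exactly periodic signal the smoother locks onto the cycle, so the one-step-ahead forecast recovers the next value of the pattern; a vertical autoscaler would size the CPU request from such a forecast rather than from recent usage alone.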
109

Battery Capacity Prediction Using Deep Learning: Estimating battery capacity using cycling data and deep learning methods

Rojas Vazquez, Josefin January 2023 (has links)
The growing urgency of climate change has driven growth in electrification technology, where batteries have taken on an essential role in the renewable energy transition, supporting environmentally friendly technologies such as smart grids, energy storage systems, and electric vehicles. Battery cells degrade with use. Understanding lithium-ion battery degradation during operation aids the prediction of future degradation and helps minimize the mechanisms that result in power fade and capacity fade. This degree project investigates capacity-based battery degradation prediction using deep learning methods, through analysis of battery degradation and health prediction for lithium-ion cells with non-destructive techniques: electrochemical impedance spectroscopy (EIS) to obtain an equivalent circuit model (ECM), and three different deep learning models using multi-channel data. The AI models were designed and developed using multi-channel data, and their performance was evaluated in MATLAB. The results reveal increased resistance in the EIS measurements as an indicator of ongoing battery aging processes such as loss of active materials, solid-electrolyte interphase thickening, and lithium plating. The AI models demonstrate accurate capacity estimation, with the LSTM model showing the best performance in the RMSE-based evaluation. These findings highlight the importance of carefully managing battery charging processes and considering the factors that contribute to degradation. Understanding degradation mechanisms enables the development of strategies to mitigate aging and extend battery lifespan, ultimately leading to improved performance.
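The RMSE-based model comparison mentioned above reduces to a one-line metric. The capacity values and model estimates here are invented solely to show the computation:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between measured and estimated values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

capacity = [1.00, 0.98, 0.96, 0.95, 0.93]   # measured capacity (illustrative)
model_a  = [1.00, 0.97, 0.96, 0.94, 0.93]   # hypothetical LSTM estimates
model_b  = [1.02, 1.00, 0.95, 0.97, 0.90]   # hypothetical baseline estimates
```

Comparing `rmse(capacity, model_a)` against `rmse(capacity, model_b)` is the same selection criterion the project uses to rank its deep learning models.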
110

LEVERAGING MACHINE LEARNING FOR ENHANCED SATELLITE TRACKING TO BOLSTER SPACE DOMAIN AWARENESS

Charles William Grey (16413678) 23 June 2023 (has links)
<p>Our modern society is more dependent on its assets in space than ever. For example, the Global Positioning System (GPS) many rely on for navigation uses data from a 24-satellite constellation. Additionally, our current infrastructure for gas pumps, cell phones, ATMs, traffic lights, weather data, etc. all depends on satellite data from various constellations. As a result, it is increasingly necessary to accurately track and predict the space domain. In this thesis, after discussing how space object tracking and object position prediction are currently done, I propose a machine learning-based approach to improving space object position prediction over the standard SGP4 method, which is limited in prediction accuracy to a horizon of about 24 hours. Using this approach, we are able to show that meaningful improvements over the standard SGP4 model can be achieved with a machine learning model built on a type of recurrent neural network called a long short-term memory (LSTM) model. I also provide distance predictions for 4 different space objects over time frames of 15 and 30 days. Future work in this area is likely to include extending and validating this approach on additional satellites to construct a more general model, testing a wider range of models to determine limits on accuracy across a broad range of time horizons, and proposing similar methods less dependent on antiquated data formats like the TLE.</p>
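Training a sequence model such as the LSTM mentioned above requires turning position histories into (input window, target) pairs. A generic windowing helper, with an invented placeholder series standing in for position residuals, might look like:

```python
def make_windows(series, window, horizon):
    """Split a 1-D series into (input window, target) pairs for a
    sequence model: each input is `window` consecutive values, and the
    target is the value `horizon` steps after the window ends."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        pairs.append((series[i:i + window], series[i + window + horizon - 1]))
    return pairs

positions = list(range(10))        # stand-in for along-track residuals
pairs = make_windows(positions, window=3, horizon=2)
```

For example, the first pair maps the inputs `[0, 1, 2]` to the target `4`; in the thesis's setting the series would be SGP4 prediction errors rather than raw integers, and the trained model would correct SGP4's drift beyond its ~24-hour accuracy horizon.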
