  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

On-board processing with AI for more autonomous and capable satellite systems

Lund, Tamina January 2022 (has links)
While the use of Artificial Intelligence (AI) has seen a sharp rise in popularity in ground-based industries, such as autonomous navigation in the automotive industry and predictive maintenance in manufacturing, it is still only rarely used in the space industry. Hence, this thesis investigates the possibilities of using AI for on-board processing on Earth-orbiting satellites. In a first step, the interests and trends of deploying AI on-board satellites are studied, followed by the challenges hindering its development. In a second step, five potential on-board applications are selected and investigated for their overall relevance to the space industry, as well as their benefits compared to traditional approaches. Of these, using AI to predict battery degradation is selected for further study, as it shows the highest potential. Today's approaches for monitoring battery degradation on satellites are highly insufficient, and there is great demand for a new approach. Several AI-based methods have been proposed in the literature, but only rarely for processing directly on-board. Thus, I investigate the feasibility of adapting such an algorithm for on-board use, including an evaluation of the suitability of different algorithms, as well as the choice of input parameters and training data. I find that AI could greatly improve various aspects of satellite performance at both the platform and payload level, making satellites more efficient but also more capable, for example through on-board battery degradation prediction. However, its implementation is still heavily hampered by the lack of validation and verification standards for AI in space, along with limitations imposed by the space environment that restrict satellite design.
In investigating the use of AI for on-board battery prediction, I find that it would be a suitable application for constellation satellites in LEO, in particular for prolonging their operations beyond their planned lifetime while still ensuring safe decommissioning. I estimate that this would yield minimum average yearly savings in satellite replacement costs of $22 million for a constellation of 500 satellites, assuming the application extends satellite lifetime from 7 to 7.5 years. Based on the literature, I find that a Long Short-Term Memory (LSTM) algorithm could make the most accurate predictions, whereas a Gated Recurrent Unit (GRU) algorithm would be less processing-heavy at the cost of some accuracy. Training needs to be done on the ground, either on telemetry data from past, similar missions or on synthetic data from simulations. The implementation needs to be investigated in future research, including the selection of a suitable framework, but also benchmarking to evaluate the necessary processing power and memory footprint.
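The replacement-cost arithmetic behind the savings estimate can be sketched as follows. The $4.6M per-satellite replacement cost is a hypothetical figure, chosen only so the assumed numbers reproduce an estimate on the order of the thesis's $22 million:

```python
def yearly_replacement_savings(n_sats, base_life_yr, extended_life_yr, cost_per_sat):
    """Satellites replaced per year falls from n/base_life to n/extended_life;
    the difference, priced per satellite, is the yearly saving."""
    saved_sats_per_year = n_sats / base_life_yr - n_sats / extended_life_yr
    return saved_sats_per_year * cost_per_sat

# 500-satellite constellation, lifetime extended from 7 to 7.5 years,
# with a hypothetical replacement cost of $4.6M per satellite:
savings = yearly_replacement_savings(500, 7.0, 7.5, 4.6e6)  # ~ $21.9M per year
```

About 4.8 fewer satellites need replacing each year, which is where most of the sensitivity to the assumed per-satellite cost lies.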
152

LSTM Feature Engineering Through Time Series Similarity Embedding / Aspektkonstruktion för LSTM-nätverk genom inbäddning av tidsserielikheter

Bångerius, Sebastian January 2022 (has links)
Time series prediction has many applications. In cases with simultaneous series (like measurements of weather from multiple stations, or multiple stocks on the stock market), it is not unlikely that series from different measurement origins behave similarly, or respond to the same contextual signals. Training input to a prediction model could be constructed from all simultaneous measurements to try to capture the relations between the measurement origins. A generalized approach is to train a prediction model on samples from any individual measurement origin. The total amount of data is the same in both cases, but the first case uses fewer samples of a larger width, while the second uses a higher number of smaller samples. The first, high-width option risks over-fitting as a result of fewer training samples per input variable. The second, general option has no way to learn relations between the measurement origins. Augmenting the general model with contextual information would allow keeping a high samples-per-variable ratio without losing the ability to take the origin of the measurements into account. This thesis presents a vector embedding method for measurement origins in an environment with a shared response to contextual signals. The embeddings are based on multivariate time series from the origins. The embedding method is inspired by the co-occurrence matrices commonly used in Natural Language Processing. The similarity measures used between the series are Dynamic Time Warping (DTW), step-wise Euclidean distance, and Pearson correlation. The dimensionality of the resulting embeddings is reduced by Principal Component Analysis (PCA) to increase information density and effectively preserve variance in the similarity space.
The created embedding system allows contextualization of samples, akin to the human intuition that comes from knowing where measurements were taken, like knowing what sort of company a stock ticker represents, or what environment a weather station is located in. In the embedded space, embeddings of series from fundamentally similar measurement origins lie close together, so that information about the behavior of one can be generalized to its neighbors. The resulting embeddings agree well with existing clustering methods on a weather dataset, and partially on a financial dataset, and provide a performance improvement for an LSTM network acting on the financial dataset. The similarity embeddings also outperform an embedding layer trained together with the LSTM.
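The pipeline described above (pairwise similarity matrix, then PCA) can be sketched in a few lines. This minimal version uses Pearson correlation, one of the thesis's three similarity measures, and a bare-bones PCA via eigendecomposition; the actual method's details (DTW, scaling choices) may differ:

```python
import numpy as np

def similarity_embeddings(series, n_components=2):
    """series: array of shape (n_origins, T).
    Build a Pearson-correlation similarity matrix and compress it with
    PCA, so each measurement origin gets a low-dimensional embedding of
    its similarity profile against all other origins."""
    sim = np.corrcoef(series)                 # (n, n) similarity matrix
    centered = sim - sim.mean(axis=0)         # center columns for PCA
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ top                     # (n, n_components) embeddings
```

Origins whose series co-move end up with nearly identical rows in the similarity matrix, and therefore nearby embeddings.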
153

A study of forecasts in Financial Time Series using Machine Learning methods

Asokan, Mowniesh January 2022 (has links)
Forecasting financial time series is one of the most challenging problems in economics and business. Markets are highly complex due to non-linearity and uncertainty in the data, and prices move up and down without any clear pattern. Based on historical univariate close prices from the S&P 500, SSE, and FTSE 100 indexes, this thesis forecasts future values using two approaches: a classical one, comprising a Seasonal ARIMA (SARIMA) model and a hybrid ARIMA-GARCH model, and an LSTM neural network. Each method is evaluated at different forecast horizons. Experimental results show that the LSTM and hybrid ARIMA-GARCH models perform better than the SARIMA model. To measure model performance, we used the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE).
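The two evaluation metrics named in the abstract are standard and easy to state precisely; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error: penalizes large errors quadratically."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the forecast errors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))
```

RMSE weights a single large miss more heavily than MAE does, which is why the two can rank models differently.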
154

Sensor modelling for anomaly detection in time series data

JALIL POUR, ZAHRA January 2022 (has links)
Mechanical devices in industry are equipped with numerous sensors to capture the health state of the machines. The reliability of the machine's health system depends on the quality of sensor data. In order to predict the health state of sensors, abnormal behaviour of sensors must be detected to avoid unnecessary cost. We propose an LSTM autoencoder whose objective is to reconstruct the input time series and predict the next time instance based on historical data, and we evaluate anomalies in multivariate time series via the reconstruction error. We also use an exponential moving average as a preprocessing step to smooth the trend of the time series, removing high-frequency noise and low-frequency deviation in the multivariate time series data. Our experimental results, based on different datasets of multivariate time series from gas turbines, demonstrate that the proposed model detects both injected anomalies and anomalies in real-world data. The accuracy of the model under 5 percent injected anomalies is 98.45%.
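The two mechanical pieces of this pipeline, EMA smoothing and thresholding the reconstruction error, can be sketched without the autoencoder itself. Here the smoothed series stands in for the learned reconstruction, which is an illustrative simplification:

```python
import numpy as np

def ema(x, alpha=0.1):
    """Exponential moving average, the preprocessing step used to smooth
    the series: each point is a blend of the new sample and the running
    average."""
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def flag_anomalies(x, reconstruction, threshold):
    """Flag points whose reconstruction error exceeds a threshold; in the
    thesis the reconstruction comes from the LSTM autoencoder."""
    err = np.abs(np.asarray(x, dtype=float) - np.asarray(reconstruction))
    return err > threshold
```

A sudden spike deviates sharply from the smoothed (or reconstructed) signal at that instant, so its error crosses the threshold while normal points do not.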
155

A comparative analysis on the predictive performance of LSTM and SVR on Bitcoin closing prices.

Rayyan, Hakim January 2022 (has links)
Bitcoin has, since its inception in 2009, seen its market capitalisation rise to a staggering 846 billion US dollars, making it the world's leading cryptocurrency. This has attracted financial analysts as well as researchers to experiment with different models with the aim of developing one capable of predicting Bitcoin closing prices. The aim of this thesis was to examine how well the LSTM and SVR models performed in predicting Bitcoin closing prices. As measures of performance, the RMSE, NRMSE and MAPE were used, with a random walk without drift as a benchmark to further contextualise the performance of both models. The empirical results show that the random walk without drift yielded the best results for both the RMSE and NRMSE, scoring 1624.638 and 0.02525 respectively, while the LSTM outperformed both the random walk without drift and the SVR model in terms of the MAPE, scoring 0.0272 against 0.0274 for both. Given the performance of the random walk against both models, it cannot be inferred that the LSTM and SVR models yielded statistically significant predictions.
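The benchmark used here is worth making concrete, since it is what both learned models must beat. A random walk without drift forecasts each day's close as the previous day's close; a minimal sketch, together with the MAPE metric:

```python
import numpy as np

def random_walk_forecast(prices):
    """Random walk without drift: the forecast for each day is simply the
    previous day's closing price."""
    return np.asarray(prices, dtype=float)[:-1]

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, one of the three metrics used."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))
```

That such a trivial baseline wins on RMSE and NRMSE is exactly why the abstract declines to call the learned models' predictions significant.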
156

Traffic Signal Phase and Timing Prediction: A Machine Learning and Controller Logic Hybrid Approach

Eteifa, Seifeldeen Omar 14 March 2024 (has links)
Green light optimal speed advisory (GLOSA) systems require reliable estimates of signal switching times to improve vehicle energy/fuel efficiency. Deployment of successful infrastructure-to-vehicle communication requires Signal Phase and Timing (SPaT) messages to be populated with the most likely estimates of switching times and confidence levels in these estimates. Obtaining these estimates is difficult for actuated signals, where the length of each green indication changes to accommodate varying traffic conditions and pedestrian requests. This dissertation explores the different ways in which predictions can be made for the most likely switching times. Data are gathered from six intersections along the Gallows Road corridor in Northern Virginia. The application of long short-term memory (LSTM) neural networks for obtaining predictions is explored for one of the intersections. Different loss functions are tried for the purpose of prediction, and a new loss function is devised. Mean absolute percentage error is found to be the best loss function for short-term predictions, mean squared error is best for long-term predictions, and the proposed loss function balances both well. The amount of historical data needed to make a single accurate prediction is assessed. The assessment concludes that short-term prediction is accurate with only a 3 to 10 second time window into the past, as long as the training dataset is large enough. Long-term prediction, however, is better with a larger past time window. The robustness of LSTM models to different demand levels is then assessed utilizing the unique scenario created by the COVID-19 pandemic stay-at-home order. The study shows that the models are robust to the changing demands, and while regularization does not substantially affect their robustness, L1 and L2 regularization can improve the overall prediction performance.
An ensemble approach is used, considering the use of transformers for SPaT prediction for the first time across the six intersections. Transformers are shown to outperform other models, including LSTM. The ensemble provides a valuable metric to show the certainty level in each of the predictions through the level of consensus of the models. Finally, a hybrid approach integrating deep learning and controller logic is proposed by predicting actuations separately and using a digital twin to replicate SPaT information. The approach is proven to be the best approach, with 58% less mean absolute error than other approaches. Overall, this dissertation provides a holistic methodology for predicting SPaT and the certainty level associated with it, tailored to the existing technology and communication needs. / Doctor of Philosophy / Automated and connected vehicles waste a lot of fuel and energy stopping and going at traffic signals. The ideal case is for them to be able to know ahead of time when the traffic signal turns green and plan to reach the intersection by the time it is green, so they do not have to stop. Not having to stop can save up to 40 percent of the gas used at the intersection. This is a difficult task because the green time is not fixed. It has a minimum and maximum setting, and it keeps extending the green every time a new vehicle arrives. While this is good for adapting to traffic, it makes it difficult to know exactly when the traffic signal turns green so as to reach the intersection at that time. In this dissertation, different models are used to predict ahead of time when the traffic signal will change. A model known as a long short-term memory (LSTM) neural network is chosen, which is a way to recognize how the traffic signal is expected to behave in the future from its past behavior. The point is to reduce the errors in the predictions. The first thing is to look at the loss function, which is how the model deals with error.
It is found that the best approach is to take the average of the absolute value of the error as a percentage of the prediction if the prediction is that the traffic signal will change soon. If it is a longer time until the traffic signal changes, the best approach is to take the average of the square of the error. Finally, another function is introduced to balance both. The second thing explored is how far back in time the data given to the model needs to reach for accurate predictions. For predictions of less than 20 seconds into the future, only 3 to 10 seconds in the past are needed. For predictions further into the future, looking further back can be useful. The third thing explored was how these models would do after rare events like the COVID-19 pandemic. It was found that even though many fewer cars were passing through the intersections, the models still had low errors. Techniques known as regularization were used to reduce the models' reliance on specific data. This did not help the models do better after COVID, but two techniques known as L1 and L2 regularization improved overall performance. The study was then expanded to include six intersections and three models beyond the LSTM. One of these models, known as transformers, had never been used before for this problem and was shown to make better predictions than the other models. The consensus between the models, which is how many of the models agree on the prediction, was used as a measure of certainty in the prediction and was proven to be a good indicator. An approach is then introduced that combines knowledge of the traffic signal controller logic with the powerful predictions of machine learning models. This is done by making a computer program that replicates the logic of the traffic signal controller, known as a digital twin. Machine learning models are then used to predict vehicle arrivals.
The program is then run using the predicted arrivals to provide a replication of the signal timing. This approach is found to be the best approach with 58 percent less error than the other approaches. Overall, this dissertation provides an end-to-end solution that uses real data generated from intersections to predict the time to green and estimate the certainty in prediction that can help automated and connected vehicles be more fuel efficient.
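The loss-function finding above (MAPE best short-term, MSE best long-term, a new function balancing both) suggests a simple form. The weighted blend below is a hypothetical sketch; the dissertation's actual loss function may be shaped differently:

```python
import numpy as np

def mape_loss(y_true, y_pred):
    """Mean absolute percentage error: best for short-term predictions."""
    return np.mean(np.abs((y_true - y_pred) / y_true))

def mse_loss(y_true, y_pred):
    """Mean squared error: best for long-term predictions."""
    return np.mean((y_true - y_pred) ** 2)

def balanced_loss(y_true, y_pred, w=0.5):
    """Hypothetical weighted blend of the two; w trades off the
    short-term-friendly term against the long-term-friendly one."""
    return w * mape_loss(y_true, y_pred) + (1 - w) * mse_loss(y_true, y_pred)
```

Because MAPE is scale-free while MSE grows with the square of the error, the blend weight effectively decides which regime dominates training.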
157

LEVERAGING MACHINE LEARNING FOR FAST PERFORMANCE PREDICTION FOR INDUSTRIAL SYSTEMS : Data-Driven Cache Simulator

Yaghoobi, Sharifeh January 2024 (has links)
This thesis presents a novel solution for CPU architecture simulation with a primary focus on cache miss prediction using machine learning techniques. The solution consists of two main components: a configurable application designed to generate detailed execution traces via DynamoRIO and a machine learning model, specifically a Long Short-Term Memory (LSTM) network, developed to predict cache behaviors based on these traces. The LSTM model was trained and validated using a comprehensive dataset derived from detailed trace analysis, which included various parameters like instruction sequences and memory access patterns. The model was tested against unseen datasets to evaluate its predictive accuracy and robustness. These tests were critical in demonstrating the model’s effectiveness in real-world scenarios, showing it could reliably predict cache misses with significant accuracy. This validation underscores the viability of machine learning-based methods in enhancing the fidelity of CPU architecture simulations. However, performance tests comparing the LSTM model and DynamoRIO revealed that while the LSTM achieves satisfactory accuracy, it does so at the cost of increased processing time. Specifically, the LSTM model processed 25 million instructions in 45 seconds, compared to DynamoRIO’s 41 seconds, with additional overheads for loading and executing the inference process. This highlights a critical trade-off between accuracy and simulation speed, suggesting areas for further optimization and efficiency improvements in future work.
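A learned cache-miss predictor needs ground-truth labels, which come from simulating a cache over the memory-access trace. The sketch below labels each access in a trace as a hit or miss under an LRU policy; it is an illustrative stand-in, since the thesis's simulator and cache configuration are not specified here:

```python
from collections import OrderedDict

def label_cache_misses(accesses, capacity):
    """Simulate an LRU cache over an address trace, producing the
    hit (0) / miss (1) labels a learned predictor would train against."""
    cache = OrderedDict()
    labels = []
    for addr in accesses:
        if addr in cache:
            cache.move_to_end(addr)        # refresh recency on a hit
            labels.append(0)
        else:
            labels.append(1)               # miss: insert, maybe evict
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return labels
```

Pairing each label with the preceding window of accesses yields exactly the kind of sequence-to-label dataset an LSTM consumes.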
158

Building occupancy analytics based on deep learning through the use of environmental sensor data

Zhang, Zheyu 24 May 2023 (has links)
Balancing indoor comfort and energy consumption is crucial to building energy efficiency. Occupancy information is a vital aspect of this process, as it determines the energy demand. Although various sensors are used to gather occupancy information, environmental sensors stand out due to their low cost and privacy benefits. Machine learning algorithms play a critical role in estimating the relationship between occupancy levels and environmental data. To improve performance, more complex models such as deep learning algorithms are necessary. Long Short-Term Memory (LSTM) is a powerful deep learning algorithm that has been utilized in occupancy estimation. Recently, however, the attention mechanism has emerged with improved performance. This study proposes a more effective model for occupancy level estimation by incorporating attention into the existing Long Short-Term Memory algorithm. The results show that the proposed model is more accurate than using a single algorithm and has the potential to be integrated into building energy control systems to conserve even more energy. / Master of Science / The motivation for energy conservation and sustainable development is rapidly increasing, and building energy consumption is a significant part of overall energy use. In order to make buildings more energy efficient, it is necessary to obtain information on the occupancy level of rooms in the building. Environmental sensors are used to measure factors such as humidity and sound to determine occupancy information. However, the relationship between sensor readings and occupancy levels is complex, making it necessary to use machine learning algorithms to establish a connection. As a subfield of machine learning, deep learning is capable of processing complex data. This research aims to utilize advanced deep learning algorithms to estimate building occupancy levels based on environmental sensor data.
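The core of an attention layer on top of an LSTM is softmax-weighted pooling of the per-timestep hidden states. A minimal sketch of dot-product attention, not the exact layer used in the thesis:

```python
import numpy as np

def attention_pool(hidden_states, query):
    """Pool a sequence of hidden states (T, d), such as LSTM outputs,
    into one context vector by softmax-weighting each timestep's
    alignment with a query vector."""
    scores = hidden_states @ query              # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over timesteps
    return weights @ hidden_states              # (d,) context vector
```

Instead of forcing the final LSTM state to summarize the whole window of sensor readings, attention lets the model emphasize the timesteps most informative about current occupancy.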
159

Stock Price Movement Prediction Using Sentiment Analysis and Machine Learning

Wang, Jenny Zheng 01 June 2021 (has links) (PDF)
Stock price prediction is of strong interest but a challenging task for both researchers and investors. Recently, sentiment analysis and machine learning have been adopted in stock price movement prediction. In particular, retail investors' sentiment from online forums has shown its power to influence the stock market. In this paper, a novel system was built to predict stock price movement for the following trading day. The system includes a web scraper, an enhanced sentiment analyzer, a machine learning engine, an evaluation module, and a recommendation module. The system can automatically select the best prediction model from four state-of-the-art machine learning models (Long Short-Term Memory, Support Vector Machine, Random Forest, and Extreme Gradient Boosting) based on the acquired data and the models' performance. Moreover, stock market lexicons were created using large-scale text mining on the Yahoo Finance Conversation boards and natural language processing. Experiments using the top 30 stocks on the Yahoo users' watchlists and a randomly selected stock from NASDAQ were performed to examine the system performance and proposed methods. The experimental results show that incorporating sentiment analysis can improve the prediction for stocks with a large daily discussion volume. The Long Short-Term Memory model outperformed the other machine learning models when using both price and sentiment analysis as inputs. In addition, the Extreme Gradient Boosting (XGBoost) model achieved the highest accuracy using the price-only feature on low-volume stocks. Last but not least, the models using the enhanced sentiment analyzer outperformed the VADER sentiment analyzer by 1.96%.
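The lexicon-based scoring that underlies both VADER and the custom finance lexicons can be sketched simply: a post's sentiment is the average score of the lexicon words it contains. The entries below are made-up illustrations, not the thesis's mined lexicon:

```python
def lexicon_sentiment(text, lexicon):
    """Score a post as the mean lexicon score of the words it contains;
    posts with no lexicon words score neutral (0.0)."""
    words = text.lower().split()
    scores = [lexicon[w] for w in words if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Illustrative entries only; the thesis mined its lexicon from the
# Yahoo Finance Conversation boards.
finance_lexicon = {"moon": 1.0, "rocket": 0.8, "dump": -0.9, "bagholder": -0.7}
```

Averaged over a day's posts for a ticker, such scores become the sentiment feature fed to the prediction models alongside price.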
160

On simulating and predicting pedestrian trajectories in a crowd

Bisagno, Niccolò 15 April 2020 (has links)
Crowds of people gather at many venues, such as concerts, political rallies, and commercial malls, or simply while walking on the streets. More and more people are flocking to live in urban areas, generating many crowd scenarios. As a consequence, there is an increasing demand for automatic tools that can analyze and predict the behavior of crowds to ensure safety. Crowd motion analysis is a key feature in surveillance and monitoring applications, providing useful hints about potential threats to safety and security in urban and public spaces. It is well known that gatherings of people are generally difficult to model, due to the diversity of the agents composing the crowd. Each individual is unique, being driven not only by the destination but also by personality traits and attitude. The domain of crowd analysis has been widely investigated in the literature. However, crowd gatherings have sometimes resulted in dangerous scenarios in recent years, such as stampedes. To take a step toward ensuring the safety of crowds, in this work we investigate two main research problems: we try to predict each person's future position, and we try to understand which factors are key to simulating crowds. Predicting in advance how a mass of people will behave in a given space would help ensure the safety of public gatherings.
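A standard reference point for the per-person position-prediction problem is a constant-velocity baseline: extrapolate the last observed displacement. This is a common sanity-check baseline in the trajectory-prediction literature, not the thesis's own model:

```python
import numpy as np

def constant_velocity_forecast(track, n_steps):
    """track: (T, 2) observed positions for one pedestrian.
    Extrapolate the last observed per-step displacement to forecast
    the next n_steps positions."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]            # last displacement per step
    steps = np.arange(1, n_steps + 1)[:, None]
    return track[-1] + steps * velocity         # (n_steps, 2) future positions
```

Learned models earn their keep only where this baseline fails: turns, stops, and interactions with other pedestrians and obstacles.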
