  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Forecasting Human Response in the Loop with Eco-Driving Advanced Driver Assistance Systems (ADAS): A Modeling and Experimental Study

Jacome, Olivia M. 06 September 2022 (has links)
No description available.
142

Efficient Wearable Big Data Harnessing and Mining with Deep Intelligence

Elijah J Basile (13161057) 27 July 2022 (has links)
Wearable devices and their ubiquitous use and deployment across multiple areas of health provide key insights into patient and individual status via big data through sensor capture at key parts of the individual's body. While small and low cost, their limitations rest in their computational and battery capacity. One key use of wearables has been in individual activity capture. For accelerometer and gyroscope data, oscillatory patterns exist between daily activities that users may perform. By leveraging spatial and temporal learning via CNN and LSTM layers to capture both the intra- and inter-oscillatory patterns that appear during these activities, we deployed data sparsification via autoencoders to extract the key topological properties from the data and transmit the compressed data via BLE to a central device for later decoding and analysis. Several autoencoder designs were developed to determine the principles of system design, comparing the encoding overhead on the sensor device with signal reconstruction accuracy. By leveraging an asymmetric autoencoder design, we were able to offload much of the computational and power cost of signal reconstruction from the wearable to the central device, while still providing robust reconstruction accuracy at several compression efficiencies. Via our high-precision Bluetooth voltmeter, the integrated sparsified data transmission configuration was tested for all quantization and compression efficiencies, yielding lower power consumption than the setup without data sparsification for all autoencoder configurations.

Human activity recognition (HAR) is a key facet of lifestyle and health monitoring. Effective HAR classification mechanisms and tools can provide healthcare professionals, patients, and individuals key insights into activity levels and behaviors without the intrusive use of human or camera observation. We leverage both spatial and temporal learning mechanisms via CNN- and LSTM-integrated architectures to derive an optimal classification architecture that provides robust classification performance for raw activity inputs, and determine that an LSTMCNN utilizing a stacked bidirectional LSTM layer provides superior classification performance to the CNNLSTM (also utilizing a stacked bidirectional LSTM) at all input widths. All inertial data classification frameworks are based on sensor data drawn from wearable devices placed at key sections of the body. Because wearable devices are limited in computational and battery power, data compression techniques have been employed to limit the quantity of transmitted data and reduce the on-board power consumption. While this compression methodology has been shown to reduce overall device power consumption, it comes at the cost of some information loss in the reconstructed signals. By employing an asymmetric autoencoder design and training the LSTMCNN classifier with the reconstructed inputs, we minimized the classification performance degradation due to the wearable signal reconstruction error. The classifier was further trained on the autoencoder outputs for several input widths and with quantized and unquantized models. The accuracy of the classifier trained on reconstructed data ranged between 93.0% and 86.5%, depending on input width and autoencoder quantization, showing the promising potential of deep learning with wearable sparsification.
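The asymmetric split described above can be made concrete with a small sketch: a lightweight encoder that could run on the wearable and a heavier decoder on the central device, trained end to end as an autoencoder. This is an illustrative Keras example under assumed window length, channel count, latent size, and layer widths, not the configuration from the thesis.

```python
# Minimal sketch (not the thesis implementation) of an asymmetric autoencoder:
# a small encoder for the wearable, a larger decoder for the central device.
# Window length, latent size, and layer widths are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, Model

WIN, CH, LATENT = 128, 6, 16          # 128-sample window, 6 IMU channels (assumed)

# Lightweight encoder: few parameters so it can run on the sensor device.
enc_in = layers.Input(shape=(WIN, CH))
x = layers.Conv1D(8, 5, strides=2, activation="relu", padding="same")(enc_in)
x = layers.Conv1D(4, 5, strides=2, activation="relu", padding="same")(x)
x = layers.Flatten()(x)
latent = layers.Dense(LATENT)(x)       # compressed code transmitted over BLE
encoder = Model(enc_in, latent, name="wearable_encoder")

# Heavier decoder: reconstruction cost is paid on the central device.
dec_in = layers.Input(shape=(LATENT,))
y = layers.Dense(WIN // 4 * 32, activation="relu")(dec_in)
y = layers.Reshape((WIN // 4, 32))(y)
y = layers.Conv1DTranspose(32, 5, strides=2, activation="relu", padding="same")(y)
y = layers.Conv1DTranspose(CH, 5, strides=2, padding="same")(y)
decoder = Model(dec_in, y, name="central_decoder")

auto = Model(enc_in, decoder(encoder(enc_in)))
auto.compile(optimizer="adam", loss="mse")

# Train end to end on synthetic windows; only the encoder is deployed on-device.
windows = np.random.randn(256, WIN, CH).astype("float32")
auto.fit(windows, windows, epochs=1, batch_size=32, verbose=0)
print(encoder.count_params(), "encoder params vs", decoder.count_params(), "decoder params")
```

The parameter-count printout illustrates the design choice: most of the reconstruction cost sits in the decoder, which is the part that stays off the wearable.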
143

Human Path Prediction Using Auto Encoder LSTMs and Single Temporal Encoders

Hudgins, Hayden 01 January 2020 (has links) (PDF)
Due to automation, the world is changing at a rapid pace. Autonomous agents have become more common over the last several years and, as a result, have created a need for improved software to back them up. The most important aspect of this software is path prediction, as robots need to be able to decide where to move in the future. In order to accomplish this, a robot must know how to avoid humans, putting frame prediction at the core of many modern-day solutions. A popular way to solve this complex problem of frame prediction is the Auto Encoder LSTM. Though there are many implementations of this, at its core it is a neural network composed of a series of time-sensitive processing blocks that shrink and then grow the data's dimensions to make a prediction. The idea of using Auto Encoder-style networks for frame prediction has also been adapted by others to make Temporal Encoders. These neural networks work much like traditional Auto Encoders, in which the data is reduced and then expanded back up. These networks attempt to tease out a series of frames, including a predictive frame of the future. The problem with many of these networks is that they take an immense amount of computation power and time to reach an acceptable level of performance. This thesis presents possible ways of pre-processing input frames to these networks in order to gain performance, in the best case seeing a 360x improvement in accuracy compared to the original models. This thesis also extends the work done with Temporal Encoders to create more precise prediction models, which showed consistent improvements of at least 50% for some metrics. All of the generated models were compared using a simulated data set collected from recordings of ground-level viewpoints in Cities: Skylines. The predicted frames were then analyzed using a common perceptual distance metric, Minkowski distance, as well as a custom metric that tracked distinct areas in frames. All of the work was run on a constrained system in order to see the effects of the changes as they pertain to systems with limited hardware access.
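As a rough illustration of next-frame prediction with recurrent encoder-style networks, the sketch below trains a small ConvLSTM to map a short clip to the following frame. It is a generic baseline under assumed frame size and clip length, not a reimplementation of the Auto Encoder LSTMs or Temporal Encoders evaluated in the thesis.

```python
# Minimal next-frame prediction sketch (not the thesis's models): a ConvLSTM
# reads a short clip and predicts the following frame. Frame size, clip length,
# and layer widths are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, Model

T, H, W = 8, 64, 64                       # 8-frame clips of 64x64 grayscale (assumed)

inp = layers.Input(shape=(T, H, W, 1))
x = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=True)(inp)
x = layers.ConvLSTM2D(16, 3, padding="same", return_sequences=False)(x)
out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # predicted frame t+1
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")

clips = np.random.rand(32, T, H, W, 1).astype("float32")
next_frames = np.random.rand(32, H, W, 1).astype("float32")
model.fit(clips, next_frames, epochs=1, batch_size=8, verbose=0)
print(model.predict(clips[:1], verbose=0).shape)   # (1, 64, 64, 1)
```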
144

Predicting the Options Expiration Effect Using Machine Learning Models Trained With Gamma Exposure Data / Prediction of the Impact on the Stock Market When Options Expire, Using Machine Learning Models Trained with Daily GEX Values

Dubois, Alexander January 2022 (has links)
The option expiration effect is a well-studied phenomenon; however, few studies have implemented machine learning models to predict the effect of options expiration on the underlying stock market. In this paper, four machine learning models (SVM, random forest, AdaBoost, and LSTM) are evaluated on their ability to predict whether the underlying index rises on the day of option expiration. The options expiration effect is mainly driven by portfolio rebalancing made by market makers who aim to maintain delta-neutral portfolios. Whether or not market makers need to rebalance their portfolios depends on at least two variables: gamma and open interest. Hence, the machine learning models in this study use gamma exposure (i.e., a combination of gamma and open interest) to predict the options expiration effect. Furthermore, four architectures of LSTM are implemented and evaluated. The study shows that a three-layered many-to-one LSTM model achieves superior results with an F1 score of 62%. However, none of the models achieved better predictions than a model that predicts only positive classes. Some of the problems regarding gamma exposure are discussed and possible improvements for future studies are given. / Several studies have shown that the options market affects the stock market, especially on option expiration dates. However, few studies have examined the ability of machine learning models to predict this effect. In this study, four machine learning models (SVM, random forest, AdaBoost, and LSTM) are implemented and evaluated with the aim of predicting whether the underlying stock market rises on option expiration dates. The options market affects the stock market on expiration dates because market makers rebalance their portfolios to maintain delta-neutral positions. Market makers' need to rebalance their portfolios depends on at least two variables: gamma and the number of open option contracts. The machine learning models in this study therefore use GEX, a combination of gamma and the number of open option contracts, to predict whether the market rises on option expiration dates. Furthermore, four variants of LSTM models are implemented and evaluated. The study shows that a three-layer many-to-one LSTM model achieved the best results with an F1 score of 62%. However, none of the models achieved better results than a model that predicts only positive classes. Finally, the problems of using GEX are discussed and recommendations for future studies are given.
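A minimal sketch of the non-recurrent part of this setup appears below: the three scikit-learn classifiers named in the abstract are trained on synthetic stand-in GEX features to predict an up/down label and compared against the always-positive baseline mentioned above. The feature columns and data are assumptions for illustration only.

```python
# Illustrative sketch only: classifying "index up on expiration day" from daily
# gamma-exposure (GEX) features with the three non-recurrent models the thesis
# lists. Features and labels here are synthetic; column meanings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. GEX level, GEX change, open interest (assumed)
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # 1 = index rose

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

models = {
    "SVM": SVC(),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, clf.predict(X_te)), 3))

# Baseline the thesis compares against: always predict the positive class.
print("Always-positive F1:", round(f1_score(y_te, np.ones_like(y_te)), 3))
```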
145

Big Data in Small Tunnels: Turning Alarms Into Intelligence

Olli, Oscar January 2020 (has links)
In this thesis we examine methods for evaluating a traffic alarm system. Nuisance alarms can quickly increase the volume of alarms experienced by the alarm operator and obstruct their work. We propose two methods for removing a number of these nuisance alarms, so that events of higher priority can be targeted. A parallel correlation analysis demonstrated significant correlation between both single alarms and clusters of alarms, presenting a strong case for causality. A serial correlation analysis was also performed, but it could not establish evidence of consequential alarms. To assist Trafikverket with maintenance scheduling, a long short-term memory (LSTM) model was implemented to predict univariate time series of discretely binned alarm sequences. The experiments show that the LSTM model provides higher precision for alarm sequences with high repeatability and recurring patterns. For other, randomly occurring alarms, the model performs unsatisfactorily. / This master's thesis examines different methods for evaluating an alarm system oriented towards traffic safety. Nuisance alarms can create large volumes of alarms that obstruct the work of alarm operators. We propose two methods for removing nuisance alarms, so that attention can be directed towards higher-priority warnings. A parallel correlation analysis demonstrated high correlation between both individual alarms and clusters of alarms, presenting a strong causal relationship. A cross-correlation was also performed, but it could not establish the existence of so-called consequential alarms. To assist Trafikverket with maintenance scheduling, a long short-term memory (LSTM) model has been implemented to predict univariate time series of discretized alarm sequences. The experiments conclude that the LSTM model performs better for alarm sequences with recurring patterns. For more randomly generated alarm sequences, the model performs with lower precision.
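The forecasting component can be illustrated with a small sketch: a univariate LSTM trained on sliding windows of discretely binned alarm counts, here on a synthetic hourly series with a daily pattern. The window length, bin width, and network size are assumptions, not Trafikverket's configuration.

```python
# Minimal sketch (assumptions, not Trafikverket's system): forecasting the next
# bin of a discretely binned alarm-count series with a small LSTM over sliding windows.
import numpy as np
from tensorflow.keras import layers, Sequential

LOOKBACK = 24                                    # 24 past bins per input window (assumed)

# Synthetic hourly alarm counts with a daily repeating pattern plus noise.
t = np.arange(2000)
counts = np.random.poisson(3 + 2 * np.sin(2 * np.pi * t / 24)).astype("float32")

X = np.stack([counts[i:i + LOOKBACK] for i in range(len(counts) - LOOKBACK)])[..., None]
y = counts[LOOKBACK:]

model = Sequential([
    layers.Input(shape=(LOOKBACK, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

print("next-bin forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```

Such a model does well exactly when the series has recurring structure, which matches the thesis's observation that precision drops for randomly occurring alarms.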
146

Objectively recognizing human activity in body-worn sensor data with (more or less) deep neural networks / Objective recognition of human activity from accelerometer data with (more or less) deep neural networks

Broomé, Sofia January 2017 (has links)
This thesis concerns the application of different artificial neural network architectures to the classification of multivariate accelerometer time series data into activity classes such as sitting, lying down, running, or walking. There is a strong correlation between increased health risks in children and their amount of daily screen time (as reported in questionnaires). The dependency is not clearly understood, as no such dependencies are reported when the sedentary (idle) time is measured objectively. Consequently, there is an interest from the medical side to be able to perform such objective measurements. To enable large studies, the measurement equipment should ideally be low-cost and non-intrusive. The report investigates how well these movement patterns can be distinguished given a certain measurement setup and a certain network structure, and how well the networks generalise to noisier data. Recurrent neural networks are given extra attention among the different networks, since they are considered well suited for data of a sequential nature. Close to state-of-the-art results (95% weighted F1-score) are obtained for the tasks with 4 and 5 classes, which is notable since a considerably smaller number of sensors is used than in the previously published results. Another contribution of this thesis is a new labeled dataset with 12 activity categories, consisting of around 6 hours of recordings, comparable in number of samples to benchmarking datasets. The data collection was made in collaboration with the Department of Public Health at Karolinska Institutet. / This thesis tests how well movement patterns can be distinguished from accelerometer data using the branch of machine learning known as deep learning, in which deep artificial neural networks approximate functions mapping from the domain of sensor data to predefined categories of activities such as walking, standing, sitting, or lying down. There is an interest from the medical side in being able to measure physical activity objectively, partly because a correlation has been shown between increased health risks in children and their amount of daily screen time. Such measurements should preferably be possible with non-invasive, low-cost equipment in order to enable larger studies. Simpler network architectures as well as re-implementations of the state of the art in human activity recognition (HAR) are tested both on a benchmark dataset and on data collected in collaboration with the Department of Public Health at Karolinska Institutet, and results are reported for different choices of possible classifications and different numbers of dimensions per measurement point. The results obtained (95% F1-score) on a 4- and 5-class problem are comparable to the best previously published results for activity recognition, which is notable since considerably fewer accelerometers were used here than in those studies. In addition to the classification results, this work contributes a newly collected and labelled dataset, KTH-KI-AA, comparable in number of data points to widely used benchmark datasets in the HAR field.
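A generic baseline for this kind of windowed accelerometer classification is sketched below: a 1-D convolution followed by a bidirectional recurrent layer and a softmax over activity classes. Window length, axis count, and the number of classes are assumptions; this is not the architecture or dataset from the thesis.

```python
# Illustrative CNN + recurrent classifier for windowed accelerometer data
# (a generic HAR baseline, not the thesis's exact architecture or dataset).
# Window length, channel count, and class count are assumptions.
import numpy as np
from tensorflow.keras import layers, Sequential

WIN, CH, N_CLASSES = 100, 3, 5          # 100-sample windows, 3 axes, 5 activities (assumed)

model = Sequential([
    layers.Input(shape=(WIN, CH)),
    layers.Conv1D(32, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(32)),   # recurrent layer for the temporal structure
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randn(256, WIN, CH).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).argmax(axis=1))
```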
147

IMBALANCED TIME SERIES FORECASTING AND NEURAL TIME SERIES CLASSIFICATION

Chen, Xiaoqian 01 August 2023 (has links) (PDF)
This dissertation will focus on the forecasting and classification of time series. Specifically, the forecasting problem will focus on imbalanced time series (ITS), which contain a mix of low-probability extreme observations and high-probability normal observations. Two approaches are proposed to improve the forecasting of ITS. In the first approach, proposed in chapter 2, an ITS will be modelled as a composition of normal and extreme observations, the input predictor variables and the associated forecast output will be combined into moving blocks, and the blocks will be categorized as extreme event (EE) or normal event (NE) blocks. Imbalance will be decreased by oversampling the minority EE blocks and undersampling the majority NE blocks using modifications of block bootstrapping and the synthetic minority oversampling technique (SMOTE). Convolutional neural networks (CNNs) and long short-term memory (LSTM) networks will be selected for forecast modelling. In the second approach, described in chapter 3, which focuses on improving the forecasting accuracy of LSTM models, a training strategy called Circular-Shift Circular Epoch Training (CSET) is proposed to preserve the natural ordering of observations in epochs during training without any attempt to balance the extreme and normal observations. The strategy will be universal because it could be applied to train LSTMs to forecast events in normal time series or in imbalanced time series in exactly the same manner. The CSET strategy will be formulated for both univariate and multivariate time series forecasting. The classification problem will focus on the classification of event-related potential (ERP) neural time series by exploiting information offered by the cone of influence (COI) of the continuous wavelet transform (CWT). The COI is a boundary that is superimposed on the wavelet scalogram to delineate the coefficients that are accurate from those that are inaccurate due to edge effects. The features derived from the inaccurate coefficients are, therefore, unreliable. It is hypothesized that the classifier performance would improve if unreliable features, which are outside the COI, are zeroed out, and the performance would improve even further if those features are cropped out completely. Two multidomain CNN models will be introduced to fuse the multichannel Z-scalograms and the V-scalograms. In the first multidomain model, referred to as the Z-CuboidNet, the input to the CNN will be generated by fusing the Z-scalograms of the multichannel ERPs into a frequency-time-spatial cuboid. In the second multidomain model, referred to as the V-MatrixNet, the CNN input will be formed by fusing the frequency-time vectors of the V-scalograms of the multichannel ERPs into a frequency-time-spatial matrix.
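The block-level rebalancing idea from chapter 2 can be sketched in a few lines of NumPy: moving blocks are labelled EE or NE depending on whether they contain an extreme observation, EE blocks are oversampled by bootstrap and NE blocks undersampled. The block length, extreme-value threshold, and target class sizes are assumptions, and the sketch omits the SMOTE-style synthetic blocks and the CNN/LSTM forecasters described in the dissertation.

```python
# Sketch of block-level rebalancing for an imbalanced series (a reading of the
# chapter-2 idea, not the dissertation's code): moving blocks are labelled
# extreme-event (EE) or normal-event (NE), EE blocks are oversampled by bootstrap
# and NE blocks undersampled. Block length and the extreme threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)
series = rng.gumbel(size=3000)                 # heavy upper tail -> rare extremes
threshold = np.quantile(series, 0.98)          # "extreme" = top 2% of values (assumed)

BLOCK = 24
blocks = np.stack([series[i:i + BLOCK] for i in range(len(series) - BLOCK)])
is_ee = blocks.max(axis=1) > threshold         # block contains an extreme observation

ee_blocks, ne_blocks = blocks[is_ee], blocks[~is_ee]
target = len(ne_blocks) // 2                   # target count per class (assumed)

ee_resampled = ee_blocks[rng.integers(0, len(ee_blocks), size=target)]             # oversample
ne_resampled = ne_blocks[rng.choice(len(ne_blocks), size=target, replace=False)]   # undersample

balanced = np.concatenate([ee_resampled, ne_resampled])
print(f"EE blocks: {is_ee.sum()} -> {target}, NE blocks: {len(ne_blocks)} -> {target}, "
      f"balanced set: {balanced.shape}")
```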
148

On modelling OMXS30 stocks - comparison between ARMA models and neural networks

Zarankina, Irina January 2023 (has links)
This thesis compares the performance of the statistical autoregressive integrated moving average (ARIMA) model and the long short-term memory (LSTM) neural network on a data set that represents a market index. Both models are used to predict monthly, daily, and minute close prices of the OMX Stockholm 30 Index. The chosen data were preprocessed, the models were fitted to the data, and their predictions were evaluated and compared. To evaluate forecast accuracy and to compare the two models fitted to the financial time series, we used two performance measures: mean squared error (MSE) and mean absolute percentage error (MAPE). In addition, the computation time for fitting the models was measured to evaluate and compare the computational workload associated with the two models. Other factors, such as the number of parameters and explainability, were also discussed. The analysis revealed that the minute and daily data of the OMX Stockholm 30 Index closely resembled white noise, indicating random fluctuations. For the monthly data, however, the LSTM model outperformed the ARIMA model in terms of MSE, with values of 15,230 and 14,380, respectively. Additionally, the LSTM model demonstrated superior capability in capturing the dynamics of price movement compared to ARIMA. Regarding MAPE, both models exhibited similar values, with ARIMA at 4.8 and LSTM at 4.9. In addition, the ARIMA model had significantly fewer parameters than the LSTM model and offered the advantages of being more transparent and easier to interpret.
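The comparison setup can be illustrated as follows: an ARIMA model and a small LSTM are fitted to the same synthetic price series and scored with MSE and MAPE. The ARIMA order, LSTM architecture, lookback window, and train/test split are illustrative assumptions, and the random-walk series is a stand-in for the OMXS30 close prices used in the thesis.

```python
# Hedged sketch of the comparison set-up on a synthetic price series (the thesis
# uses OMXS30 close prices; the order, layer sizes, and split here are assumptions).
import numpy as np
from tensorflow.keras import layers, Sequential
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, size=400))   # random-walk stand-in for an index
train, test = prices[:360], prices[360:]

# ARIMA: fit once, forecast the whole test horizon.
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))

# LSTM: one-step-ahead prediction from a short lookback window.
LB = 20
X = np.stack([train[i:i + LB] for i in range(len(train) - LB)])[..., None]
y = train[LB:]
lstm = Sequential([layers.Input(shape=(LB, 1)), layers.LSTM(32), layers.Dense(1)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=5, batch_size=32, verbose=0)

history = list(train[-LB:])
lstm_fc = []
for actual in test:                                    # walk forward over the test set
    pred = float(lstm.predict(np.array(history[-LB:])[None, :, None], verbose=0)[0, 0])
    lstm_fc.append(pred)
    history.append(actual)

for name, fc in [("ARIMA", arima_fc), ("LSTM", np.array(lstm_fc))]:
    print(name, "MSE:", round(mean_squared_error(test, fc), 2),
          "MAPE:", round(100 * mean_absolute_percentage_error(test, fc), 2), "%")
```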
149

ACCELERATED CELLULAR TRACTION CALCULATION BY PREDICTIONS USING DEEP LEARNING

Ibn Shafi, Md. Kamal 01 December 2023 (has links) (PDF)
This study presents a novel approach for predicting future cellular traction in a time series. The proposed method leverages two distinct look-ahead Long Short-Term Memory (LSTM) models, one for the cell boundary and the other for the traction data, to achieve rapid and accurate predictions. These LSTM models are trained using real Fourier Transform Traction Cytometry (FTTC) output data, ensuring consistency and reliability in the underlying calculations. To account for variability among cells, each cell is trained separately, mitigating generalization errors. The predictive performance is demonstrated by accurately forecasting tractions for the next 30 time instances, with an error rate below 7%. Moreover, a strategy for real-time traction calculation is proposed, involving the capture of a bead reference image before cell placement in a controlled environment. By doing so, we eliminate the need for cell removal and enable real-time calculation of tractions. Combining these two ideas, our tool speeds up the traction calculations by a factor of 1.6 by limiting TFM use. As a walk-forward prediction method is implemented, combining predicted values with real data for future predictions, further speedup is indicated. The predictive capabilities of this approach offer valuable insights, with potential applications in identifying cancerous cells based on their traction behavior over time. Additionally, we present an advanced cell boundary detection algorithm that autonomously identifies cell boundaries from obscure cell images, reducing human intervention and bias. This algorithm significantly streamlines data collection, enhancing the efficiency and accuracy of our methodology.
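The look-ahead idea can be sketched with a small direct multi-step forecaster: an LSTM that maps a window of past traction values to the next 30 time instances through a 30-unit output head. The synthetic series, window length, and layer sizes are assumptions; the study's actual models are trained per cell on FTTC output and include a separate boundary model.

```python
# Illustrative look-ahead forecaster (not the study's FTTC pipeline): an LSTM
# that maps a window of past traction values to the next 30 time instances via
# a 30-unit output head. Window length and layer sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, Sequential

LOOKBACK, HORIZON = 60, 30

# Synthetic per-cell traction magnitude series (stand-in for FTTC output).
t = np.arange(1500, dtype="float32")
traction = 1.0 + 0.3 * np.sin(2 * np.pi * t / 90) + 0.05 * np.random.randn(1500).astype("float32")

X, Y = [], []
for i in range(len(traction) - LOOKBACK - HORIZON):
    X.append(traction[i:i + LOOKBACK])
    Y.append(traction[i + LOOKBACK:i + LOOKBACK + HORIZON])
X, Y = np.array(X)[..., None], np.array(Y)

model = Sequential([
    layers.Input(shape=(LOOKBACK, 1)),
    layers.LSTM(64),
    layers.Dense(HORIZON),                 # all 30 future instances predicted at once
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=2, batch_size=32, verbose=0)

print("30-step forecast shape:", model.predict(X[-1:], verbose=0).shape)  # (1, 30)
```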
150

Machine Translation Through the Creation of a Common Embedding Space

Sandvick, Joshua 11 December 2018 (has links)
No description available.
