  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Applying Neural Networks for Tire Pressure Monitoring Systems

Kost, Alex 01 March 2018 (has links) (PDF)
A proof-of-concept indirect tire-pressure monitoring system is developed using neural networks to identify the tire pressure of a vehicle tire. A quarter-car model was developed with MATLAB and Simulink to generate simulated accelerometer output data. Simulation data are used to train and evaluate a recurrent neural network with long short-term memory blocks (RNN-LSTM) and a convolutional neural network (CNN) developed in Python with TensorFlow. Bayesian optimization via SigOpt was used to optimize training and model parameters. The predictive accuracy and training speed of the two models with various parameters are compared. Finally, future work and improvements are discussed.
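The quarter-car simulation step described above can be sketched in a few lines of NumPy. The snippet below integrates a two-degree-of-freedom quarter-car model with symplectic Euler and returns the sprung-mass acceleration; all parameter values and the pressure-to-tire-stiffness mapping are illustrative assumptions, not values from the thesis (which used MATLAB/Simulink).

```python
import numpy as np

def quarter_car_accel(kt, t_end=2.0, dt=1e-3, seed=0):
    """Simulate a 2-DOF quarter-car model (symplectic Euler) and return
    the sprung-mass acceleration.  Tire pressure enters through the tire
    stiffness kt (N/m): lower pressure -> softer tire.  All parameter
    values are illustrative, not taken from the thesis."""
    ms, mu = 290.0, 59.0            # sprung / unsprung mass (kg)
    ks, cs = 16_000.0, 1_000.0      # suspension stiffness (N/m), damping (N s/m)
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    zr = np.cumsum(rng.normal(0.0, 1e-4, n))   # random-walk road profile (m)
    zs = zs_d = zu = zu_d = 0.0
    acc = np.empty(n)
    for i in range(n):
        # equations of motion for sprung (zs) and unsprung (zu) masses
        zs_dd = (-ks * (zs - zu) - cs * (zs_d - zu_d)) / ms
        zu_dd = (ks * (zs - zu) + cs * (zs_d - zu_d) - kt * (zu - zr[i])) / mu
        zs_d += zs_dd * dt
        zu_d += zu_dd * dt
        zs += zs_d * dt
        zu += zu_d * dt
        acc[i] = zs_dd
    return acc

# a softer (lower-pressure) tire changes the vibration signature the
# networks are trained to recognize
acc_full = quarter_car_accel(kt=190_000.0)
acc_low = quarter_car_accel(kt=120_000.0)
```

Signals simulated this way (plus sensor noise) stand in for the accelerometer traces the RNN-LSTM and CNN are trained on.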
202

Text Steganalysis based on Convolutional Neural Networks

Akula, Tejasvi, Pamisetty, Varshitha January 2022 (has links)
The CNN-based steganalysis model is able to capture complex statistical dependencies and learn feature representations. The proposed model uses a word embedding layer to map words into dense vectors, achieving more accurate representations of the words. The model extracts both syntactic and semantic features. Files with fewer than 200 words are treated as short text. Preprocessing for short text is done by segmenting the words and encoding them as indexes according to the position of each word in the dictionary. The index sequences are then fed to the CNN to learn feature representations. Files containing over 200 words are treated as long text. Given the wide range of length variation in long texts, the proposed model tokenizes them into sentence components of relatively consistent length before preprocessing. Finally, the model uses a decision strategy to decide whether the text file contains stego text.
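The short-text preprocessing step described above — word segmentation followed by dictionary-index encoding — can be sketched as follows. Function names and the padding convention (index 0 for padding/unknown words) are illustrative, not from the thesis.

```python
# Minimal sketch of the short-text preprocessing step: segment words
# and encode them as dictionary indexes.  Index 0 is reserved for
# padding / out-of-vocabulary words; all names are illustrative.
def build_vocab(corpus):
    vocab = {}
    for text in corpus:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab) + 1)   # 1-based positions
    return vocab

def encode(text, vocab, max_len=16):
    ids = [vocab.get(w, 0) for w in text.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))          # pad to the CNN input width

corpus = ["the quick brown fox", "the lazy dog"]
vocab = build_vocab(corpus)
seq = encode("the unknown fox", vocab, max_len=6)    # -> [1, 0, 4, 0, 0, 0]
```

The resulting fixed-width index sequences are what the embedding layer and CNN consume.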
203

Artificial Neural Networks in Swedish Speech Synthesis / Artificiella neurala nätverk i svensk talsyntes

Näslund, Per January 2018 (has links)
Text-to-speech (TTS) systems have entered our daily lives in the form of smart assistants and many other applications. Contemporary research applies machine learning and artificial neural networks (ANNs) to synthesize speech. It has been shown that these systems outperform the older concatenative and parametric methods. In this paper, ANN-based methods for speech synthesis are explored and one of them is implemented for the Swedish language. The implemented method is dubbed "Tacotron" and is a first step towards end-to-end ANN-based TTS, putting many different ANN techniques to work. The resulting system is compared to a parametric TTS through a strength-of-preference test carried out with 20 Swedish-speaking subjects. A statistically significant preference for the ANN-based TTS is found. Test subjects indicate that the ANN-based TTS performs better than the parametric TTS in audio quality and naturalness but sometimes lacks intelligibility. / Speech synthesizers, also called TTS (text-to-speech), are widely used in smart assistants and many other applications. Contemporary research applies machine learning and artificial neural networks (ANNs) to perform speech synthesis. Studies have shown that these systems perform better than the older concatenative and parametric methods. In this report, ANN-based TTS methods are explored and one of them is implemented for the Swedish language. The method used is called "Tacotron" and is a first step towards end-to-end TTS based on neural networks; it ties together several different ANN techniques. The resulting system is compared to a parametric TTS through a graded preference test involving 20 Swedish-speaking test subjects. A statistically significant preference for the ANN-based TTS system is established. The test subjects indicate that the ANN-based TTS system performs better than the parametric one in terms of audio quality and naturalness but shows shortcomings in intelligibility.
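Significance in a preference test with 20 subjects can be checked with an exact two-sided binomial sign test against chance. The sketch below is one standard way to do this; the preference counts are illustrative, not the thesis's actual data.

```python
from math import comb

def sign_test_p(prefer_a, n):
    """Exact two-sided binomial sign test against chance (p = 0.5)."""
    k = max(prefer_a, n - prefer_a)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# illustrative outcome for 20 subjects (not the thesis's actual counts):
p = sign_test_p(17, 20)   # ~0.0026, significant at the 5 % level
```

With such a small panel, 15 or more of 20 subjects must agree before the preference clears the usual 5% threshold.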
204

Mobile Object Detection using TensorFlow Lite and Transfer Learning / Objektigenkänning i mobila enheter med TensorFlow Lite

Alsing, Oscar January 2018 (has links)
With the advancement of deep learning in the past few years, we are able to create complex machine learning models for detecting objects in images, regardless of the characteristics of the objects to be detected. This development has enabled engineers to replace existing heuristics-based systems with machine learning models of superior performance. In this report, we evaluate the viability of using deep learning models for object detection in real-time video feeds on mobile devices, in terms of object detection performance and inference delay, either as an end-to-end system or as a feature extractor for existing algorithms. Our results show a significant increase in object detection performance in comparison to existing algorithms when transfer learning is used on neural networks adapted for mobile use. / The development in deep learning in recent years means that we are capable of creating more complex machine learning models to identify objects in images, regardless of the objects' attributes or character. This development has made it possible for researchers to replace existing heuristics-based algorithms with machine learning models with superior performance. This report aims to evaluate the use of deep learning models for performing object detection in video on mobile devices with respect to performance and execution time. Our results show a significant increase in performance relative to existing heuristics-based algorithms when deep learning and transfer learning are used in artificial neural networks.
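The core idea of transfer learning — keep a pretrained feature extractor frozen and fit only a small task-specific head — can be sketched without any deep learning framework. In the snippet below the "backbone" is a fixed random projection standing in for a real pretrained trunk (e.g. a MobileNet), and the head is fitted by least squares; everything is a toy illustration, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for a
# pretrained feature extractor (purely illustrative).
W_frozen = rng.normal(size=(16, 8))

def backbone(x):                      # (n, 16) raw inputs -> (n, 8) features
    return np.tanh(x @ W_frozen / 4.0)

# Only the small task-specific head is fitted; the backbone stays fixed.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(float)       # toy binary labels
F = backbone(X)
A = np.c_[F, np.ones(len(F))]         # features + intercept
head, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    f = backbone(x)
    return (np.c_[f, np.ones(len(f))] @ head > 0.5).astype(int)

train_acc = (predict(X) == y).mean()  # well above the 0.5 chance level
```

Because only the head's few parameters are trained, this kind of setup needs far less data and compute than training the full network, which is what makes it attractive for mobile deployment.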
205

Efficient Wearable Big Data Harnessing and Mining with Deep Intelligence

Elijah J Basile (13161057) 27 July 2022 (has links)
Wearable devices, through their ubiquitous deployment across multiple areas of health, provide key insights into patient and individual status via big data captured by sensors at key parts of the individual's body. While small and low cost, they are limited by their computational and battery capacity. One key use of wearables has been individual activity capture. For accelerometer and gyroscope data, oscillatory patterns distinguish the daily activities that users perform. Leveraging spatial and temporal learning via CNN and LSTM layers to capture both the intra- and inter-oscillatory patterns that appear during these activities, we deployed data sparsification via autoencoders to extract the key topological properties from the data and transmit the compressed data via BLE to a central device for later decoding and analysis. Several autoencoder designs were developed to determine principles of system design, comparing encoding overhead on the sensor device with signal reconstruction accuracy. By leveraging an asymmetric autoencoder design, we were able to offload much of the computational and power cost of signal reconstruction from the wearable to the central devices, while still providing robust reconstruction accuracy at several compression efficiencies. Via our high-precision Bluetooth voltmeter, the integrated sparsified data transmission configuration was tested at all quantization and compression efficiencies, yielding lower power consumption than the setup without data sparsification for all autoencoder configurations.

Human activity recognition (HAR) is a key facet of lifestyle and health monitoring. Effective HAR classification mechanisms and tools can give healthcare professionals, patients, and individuals key insights into activity levels and behaviors without the intrusive use of human or camera observation. We leverage both spatial and temporal learning mechanisms via integrated CNN and LSTM architectures to derive an optimal classification architecture that provides robust classification performance for raw activity inputs, and determine that an LSTMCNN utilizing a stacked bidirectional LSTM layer provides superior classification performance to the CNNLSTM (also utilizing a stacked bidirectional LSTM) at all input widths. All inertial data classification frameworks are based on sensor data drawn from wearable devices placed at key sections of the body. Because wearable devices lack computational and battery power, data compression techniques have been employed to limit the quantity of transmitted data and reduce on-board power consumption. While this compression methodology has been shown to reduce overall device power consumption, it comes at the cost of some information loss in the reconstructed signals. By employing an asymmetric autoencoder design and training the LSTMCNN classifier with the reconstructed inputs, we minimized the classification performance degradation due to the wearable signal reconstruction error. The classifier was further trained on the autoencoder outputs for several input widths and with quantized and unquantized models. Accuracy for the classifier trained on reconstructed data ranged between 93.0% and 86.5% depending on input width and autoencoder quantization, showing the promising potential of deep learning with wearable sparsification.
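The compress-transmit-reconstruct pipeline described above can be sketched with a purely linear autoencoder, for which PCA gives the optimal weights. This is only a stand-in: the thesis's autoencoders are deep and asymmetric, with the heavy decoder placed on the central device; the point here is the on-device compression and off-device reconstruction, with illustrative shapes and data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear stand-in for the autoencoder pipeline: compress on the wearable,
# transmit the small code over BLE, reconstruct on the central device.
def fit_linear_autoencoder(X, code_dim):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:code_dim]            # encoder = top principal directions

X = rng.normal(size=(500, 32)) @ rng.normal(size=(32, 32))  # correlated "sensor" data
mean, E = fit_linear_autoencoder(X, code_dim=8)

code = (X - mean) @ E.T       # 8 numbers per sample transmitted, not 32
X_hat = code @ E + mean       # reconstruction on the receiving side

compression = X.shape[1] / code.shape[1]                 # 4x fewer values sent
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)  # reconstruction cost
```

The trade-off the thesis explores is exactly this pair of numbers: transmitted-data reduction (hence radio power) versus reconstruction error.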
206

Object Recognition in Satellite Images using an Improved Convolutional Recurrent Neural Network

NATTALA, TARUN January 2023 (has links)
Background: The background of this research lies in detecting objects in satellite images. The recognition of images from satellites has become increasingly important due to the vast amount of data that can be obtained from them. This thesis aims to develop a method for the recognition of satellite images using machine learning techniques.

Objective: The main objective of this thesis is a unique approach to recognizing the data with a CRNN algorithm, involving image recognition in satellite images using machine learning, specifically the CRNN (Convolutional Recurrent Neural Network) architecture. The main task is classifying the images accurately, achieved by utilizing object classification algorithms. The CRNN architecture is chosen because it can effectively extract features from satellite images using convolutional blocks and leverage the memory of Long Short-Term Memory (LSTM) networks to connect the extracted features efficiently. The connected features improve the accuracy of our model significantly.

Method: The proposed method involves a literature review to find current image recognition models, followed by experimentation: training a CRNN, a CNN and an RNN and comparing their performance using the metrics described in the thesis.

Results: The performance of the proposed method is evaluated using various metrics, including precision, recall, F1 score and inference speed, on a large dataset of labeled images. The results indicate that high accuracy is achieved in detecting and classifying objects in satellite images with our approach. The proposed method can be applied in areas such as environmental monitoring, urban planning, and disaster management.

Conclusion: Classification is performed on two satellite image datasets, for ships and cars. The proposed architectures are CRNN, CNN, and RNN, and the three models are compared to find the best performing algorithm. The results indicate that the CRNN has the best accuracy, precision, F1 score and inference speed, indicating strong performance by the CRNN.

Keywords: Comparison of CRNN, CNN, and RNN, Image recognition, Machine Learning, Algorithms, You Only Look Once Version 3, Satellite images, Aerial Images, Deep Learning
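The per-class metrics the comparison relies on are straightforward to compute from confusion-matrix counts. A minimal sketch, with illustrative counts rather than the thesis's results:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# illustrative counts for one class (say, "ship"), not the thesis's results
p, r, f1 = prf1(tp=90, fp=10, fn=30)   # -> 0.90, 0.75, ~0.818
```

F1 being the harmonic mean, it sits closer to the worse of precision and recall, which is why it is a stricter summary than plain accuracy for imbalanced satellite-image classes.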
207

Biological Semantic Segmentation on CT Medical Images for Kidney Tumor Detection Using nnU-Net Framework

Bergsneider, Andres 01 March 2021 (has links) (PDF)
Healthcare systems are constantly challenged with bottlenecks due to human-reliant operations, such as analyzing medical images. High precision and repeatability are necessary when performing diagnostics on patients with tumors. Throughout the years, an increasing number of advancements have been made using various machine learning algorithms for the detection of tumors, helping to fast-track diagnosis and treatment decisions. "Black box" systems such as the complex deep learning networks discussed in this paper rely heavily on hyperparameter optimization to obtain the best performance, requiring a significant time investment in tuning to acquire cutting-edge results. This paper applies a state-of-the-art deep learning framework, nnU-Net, to label computed tomography (CT) images from patients with kidney cancer through semantic segmentation, feeding raw CT images through a deep architecture and obtaining pixel-wise mask classifications. Taking advantage of nnU-Net's versatility, various configurations of the architecture are explored and applied, and the resulting performance benchmarked and compared, including variations of 2D and 3D convolutions as well as distinct cost functions such as the Sørensen-Dice coefficient, cross entropy, and a compound of the two. An accuracy of 79% is currently reported for the detection of benign and malignant tumors from CT imagery by medical practitioners. The best iteration and mixture of parameters in this work resulted in an accuracy of 83% for tumor labelling. This study further demonstrates the performance of a versatile deep learning framework designed for biomedical image segmentation.
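The cost functions mentioned above are easy to state concretely. Below is a NumPy sketch of a soft Sørensen-Dice coefficient and a Dice-plus-cross-entropy compound; the exact weighting and formulation nnU-Net uses may differ, so treat this as an illustrative form.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Soft Sørensen-Dice coefficient for binary masks / probabilities."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def compound_loss(pred, target, eps=1e-7):
    """Dice loss plus binary cross entropy; the weighting used by actual
    nnU-Net configurations may differ -- this is an illustrative form."""
    bce = -np.mean(target * np.log(pred + eps)
                   + (1 - target) * np.log(1 - pred + eps))
    return (1.0 - dice_coefficient(pred, target, eps)) + bce

target = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth mask (flattened)
good = np.array([0.9, 0.8, 0.1, 0.2])     # close to the ground truth
bad = np.array([0.1, 0.2, 0.9, 0.8])      # inverted prediction
```

The Dice term directly rewards mask overlap (robust to class imbalance in small tumors), while cross entropy gives smooth per-pixel gradients; compounding the two is a common way to get both behaviors.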
208

Performance analysis: CNN model on smartphones versus on cloud : With focus on accuracy and execution time

Stegmayr, Klas, Johansson, Edwin January 2023 (has links)
In the modern digital landscape, mobile devices serve as crucial data generators. Their usage spans from simple communication to various applications such as user behavior analysis and intelligent applications. However, privacy concerns associated with data collection are persistent. Deep learning technologies, specifically convolutional neural networks (CNNs), have been increasingly integrated into mobile applications as a promising solution. In this study, we evaluated the performance of a CNN implemented on iOS smartphones using the CIFAR-10 data set, comparing the model's accuracy and execution time before and after conversion for on-device deployment. The overarching objective was not to design the most accurate model but to investigate the feasibility of deploying machine learning models on-device while retaining their accuracy. The results revealed that both the on-cloud and on-device models yielded high accuracy (93.3% and 93.25%, respectively). However, a significant difference was observed in total execution time, with the on-device model requiring a considerably longer duration (45.64 seconds) than the cloud-based model (4.55 seconds). This study provides insights into the performance of deep learning models on iOS smartphones, aiding in understanding their practical applications and limitations.
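The trade-off the study reports can be summarized with two numbers computed directly from its figures: conversion for on-device deployment costs almost nothing in accuracy but roughly an order of magnitude in execution time.

```python
# Sanity check of the trade-off reported above, using the study's own
# figures: near-identical accuracy, ~10x longer execution on-device.
on_device_time, cloud_time = 45.64, 4.55   # seconds
on_device_acc, cloud_acc = 93.25, 93.3     # percent

slowdown = on_device_time / cloud_time     # ~10x longer on-device
acc_drop = cloud_acc - on_device_acc       # ~0.05 percentage points
```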
209

Predicting inflow and infiltration to wastewater networks based on temperature measurements

Åsell, Martin January 2024 (has links)
Sewer pipelines are deteriorating due to aging and suboptimal material selections, leading to the infiltration of clean ground and rainfall water into the pipes. It is estimated that a significant portion (up to 40-50%) of the water entering wastewater treatment plants is actually clean infiltrated water. This infiltration not only contributes to unnecessary energy consumption but also poses the risk of flooding the sewer network and treatment plants. Finding these broken pipes is of utmost importance but is not straightforward, since the pipes are located a few meters below ground. Methods exist for pinpointing where these leaks occur, but they are often time-consuming and expensive. This thesis addresses the following question: can the estimation of infiltration be accomplished solely through temperature data obtained from discrete pump stations, or is the inclusion of precipitation data essential for achieving accurate results? Two machine learning algorithms are investigated to solve the regression problem of estimating the amount of rainfall-derived infiltration. The first model is a classical linear regression model; the second is a convolutional neural network (CNN). Both models are trained on the same data set. The temperatures recorded at the stations are reliable. However, the data labeling process relies on flows to the stations calculated during dry and wet weather periods, which means the labels cannot be taken as ground truth, introducing uncertainty into the data set. Both models manage to capture the large temperature drops that indicate infiltration has occurred. The linear regression model appears too sensitive to small temperature drops and predicts infiltration when there is none, whereas the CNN model captures only the large temperature drops that accompany infiltration.
However, both models are trained with data from only one station, so they are biased towards the average temperature of that particular station; other stations may have a higher or lower average temperature. When the models are tested on a different station with a lower average temperature, they predict infiltration where there is none.
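The linear-regression baseline can be sketched on synthetic data that mimics the mechanism described above: infiltrating cold water depresses the pump-station temperature while the flow rises. All numbers below are made up for illustration; the thesis's labels come from calculated dry/wet-weather flows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sketch of the linear-regression baseline: infiltration of
# cold water lowers the station temperature and raises the flow.
n, window = 300, 6
T = 12.0 + rng.normal(0, 0.3, size=(n, window))        # °C, recent readings
events = rng.random(n) < 0.2                           # infiltration events
T[events] -= rng.uniform(1.0, 3.0, size=(events.sum(), 1))  # cold inflow
flow = 5.0 + 4.0 * np.clip(12.0 - T.mean(axis=1), 0, None) \
       + rng.normal(0, 0.2, n)                         # labeled flow (toy)

A = np.c_[T, np.ones(n)]                               # temperature window + intercept
coef, *_ = np.linalg.lstsq(A, flow, rcond=None)
pred = A @ coef
r2 = 1 - ((flow - pred) ** 2).sum() / ((flow - flow.mean()) ** 2).sum()
```

Note the station-bias problem from the abstract shows up here directly: the fitted intercept encodes this station's 12 °C baseline, so the same coefficients applied to a colder station would spuriously predict infiltration.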
210

Objectively recognizing human activity in body-worn sensor data with (more or less) deep neural networks / Objektiv igenkänning av mänsklig aktivitet från accelerometerdata med (mer eller mindre) djupa neurala nätverk

Broomé, Sofia January 2017 (has links)
This thesis concerns the application of different artificial neural network architectures on the classification of multivariate accelerometer time series data into activity classes such as sitting, lying down, running, or walking. There is a strong correlation between increased health risks in children and their amount of daily screen time (as reported in questionnaires). The dependency is not clearly understood, as there are no such dependencies reported when the sedentary (idle) time is measured objectively. Consequently, there is an interest from the medical side to be able to perform such objective measurements. To enable large studies the measurement equipment should ideally be low-cost and non-intrusive. The report investigates how well these movement patterns can be distinguished given a certain measurement setup and a certain network structure, and how well the networks generalise to noisier data. Recurrent neural networks are given extra attention among the different networks, since they are considered well suited for data of sequential nature. Close to state-of-the-art results (95% weighted F1-score) are obtained for the tasks with 4 and 5 classes, which is notable since a considerably smaller number of sensors is used than in the previously published results. Another contribution of this thesis is that a new labeled dataset with 12 activity categories is provided, consisting of around 6 hours of recordings, comparable in number of samples to benchmarking datasets. The data collection was made in collaboration with the Department of Public Health at Karolinska Institutet. 
/ Within the scope of this thesis, it is tested how well movement patterns can be distinguished from accelerometer data using the branch of machine learning called deep learning, in which deep artificial neural networks of nodes approximate functions mapping from the domain of sensor data to predefined categories of activities such as walking, standing, sitting or lying down. There is an interest from the medical side in being able to measure physical activity objectively, partly because a correlation has been shown between increased health risks in children and their amount of daily screen time. Such measurements should ideally be made with non-invasive equipment at low cost, to enable larger studies. Simpler network architectures as well as re-implementations of the state of the art in human activity recognition (HAR) are tested both on a benchmarking dataset and on data collected in collaboration with the Department of Public Health at Karolinska Institutet, and results are reported for different choices of possible classifications and different numbers of dimensions per measurement point. The results obtained (95% F1-score) on a 4- and 5-class problem are comparable with the best previously published results for activity recognition, which is notable since considerably fewer accelerometers were used here than in the cited studies. In addition to the classification results, this work contributes a newly collected and labeled dataset, KTH-KI-AA, comparable in number of data points with established benchmarking datasets in the HAR field.
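Classifiers like those above consume fixed-width windows cut from the continuous accelerometer stream. A minimal sketch of that segmentation step, with assumed sampling rate and window parameters (the thesis's exact preprocessing may differ):

```python
import numpy as np

def sliding_windows(signal, width, step):
    """Segment a (time, channels) series into fixed-width windows,
    the usual input format for activity-recognition networks."""
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

# 10 s of synthetic 3-axis accelerometer data at an assumed 50 Hz
x = np.random.default_rng(3).normal(size=(500, 3))
wins = sliding_windows(x, width=100, step=50)   # 2 s windows, 50 % overlap
```

Each window then gets a single activity label, and overlapping windows are a cheap way to multiply training samples from a fixed recording budget.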
