11

A Deep Learning Approach To Vehicle Fault Detection Based On Vehicle Behavior

Khaliqi, Rafi; Cozma, Iulian January 2023
Vehicles and machinery play a crucial role in our daily lives, contributing to our transportation needs and supporting various industries. As society strives for sustainability, the advancement of technology and efficient resource allocation become paramount. However, vehicle faults continue to pose a significant challenge, leading to accidents and unfortunate consequences. In this thesis, we aim to address this issue by exploring the effectiveness of an ensemble of deep learning models for supervised classification. Specifically, we propose to evaluate the performance of 1D-CNN-Bi-LSTM and 1D-CNN-Bi-GRU models. The Bi-LSTM and Bi-GRU models incorporate a multi-head attention mechanism to capture intricate patterns in the data. The methodology involves initial feature extraction using 1D-CNN, followed by learning the temporal dependencies in the time series data using Bi-LSTM and Bi-GRU. These models are trained and evaluated on a labeled dataset, yielding promising results. The successful completion of this thesis has met the objectives and scope of the research, and it also paves the way for future investigations and further research in this domain.
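As a rough illustration of the kind of architecture described above, a 1D-CNN feeding a bidirectional LSTM with multi-head self-attention could be sketched in PyTorch as follows; the channel counts, hidden sizes, number of classes, and mean-pooling are assumptions made for the sketch, not details taken from the thesis.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Sketch: 1D-CNN feature extraction -> Bi-LSTM -> multi-head self-attention -> classifier."""
    def __init__(self, in_channels=16, n_classes=5, hidden=64, heads=4):
        super().__init__()
        # 1D convolution extracts local features from the multivariate time series
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models temporal dependencies in both directions
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        # Multi-head self-attention over the Bi-LSTM outputs
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)        # -> (batch, time', 64)
        h, _ = self.bilstm(h)                  # -> (batch, time', 2*hidden)
        h, _ = self.attn(h, h, h)              # attention over time steps
        return self.head(h.mean(dim=1))        # mean-pool over time, then classify

model = CNNBiLSTMAttention()
print(model(torch.randn(8, 16, 200)).shape)    # torch.Size([8, 5])
```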
12

Incorporating Sparse Attention Mechanism into Transformer for Object Detection in Images / Inkludering av gles attention i en transformer för objektdetektering i bilder

Duc Dao, Cuong January 2022
DEtection TRansformer (DETR) introduces an innovative design for object detection based on softmax attention. However, the softmax operation produces dense attention patterns, i.e., all entries in the attention matrix receive a non-zero weight, regardless of their relevance for detection. In this work, we explore several alternatives to softmax to incorporate sparsity into the architecture of DETR. Specifically, we replace softmax with a sparse transformation from the α-entmax family: sparsemax and entmax-1.5, which induce a fixed amount of sparsity, and α-entmax, which treats sparsity as a learnable parameter of each attention head. In addition to evaluating the effect on detection performance, we examine the resulting attention maps from the perspective of explainability. To this end, we introduce three evaluation metrics to quantify the sparsity, complementing the qualitative observations. Although our experimental results on the COCO detection dataset do not show an increase in detection performance, we find that learnable sparsity gives the model more flexibility and produces more explainable attention maps. To the best of our knowledge, we are the first to introduce learnable sparsity into the architecture of transformer-based object detectors.
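As a concrete reference for the sparse transformations mentioned above, sparsemax replaces the softmax normalization with a Euclidean projection onto the probability simplex, which sets low-scoring entries exactly to zero. A generic PyTorch sketch of that projection (not code from the thesis) is:

```python
import torch

def sparsemax(z):
    """Projection of scores z (last dim) onto the probability simplex
    (Martins & Astudillo, 2016); low scores receive exactly zero probability."""
    z_sorted, _ = torch.sort(z, dim=-1, descending=True)
    k = torch.arange(1, z.size(-1) + 1, device=z.device, dtype=z.dtype)
    cumsum = z_sorted.cumsum(dim=-1)
    support = 1 + k * z_sorted > cumsum            # which sorted entries stay non-zero
    k_z = support.sum(dim=-1, keepdim=True)        # support size
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z.to(z.dtype)
    return torch.clamp(z - tau, min=0.0)

scores = torch.tensor([[2.0, 1.0, 0.1, -1.0]])
print(torch.softmax(scores, dim=-1))   # dense: every entry non-zero
print(sparsemax(scores))               # sparse: low-scoring entries are exactly zero
```

In an attention layer, the weights would then be computed as `sparsemax(scores)` in place of `softmax(scores)`; entmax-1.5 and α-entmax generalize the same idea.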
13

Use of improved Deep Learning and DeepSORT for Vehicle estimation / Användning av förbättrad djupinlärning och DeepSORT för fordonsuppskattning

Zheng, Danna January 2022
The Intelligent Traffic System (ITS) has high application value in today's vehicle surveillance and in future applications such as automated driving. A crucial part of an ITS is to detect and track vehicles in a real-time video stream with high accuracy and low GPU consumption. In this project, we select the one-stage YOLO version 4 (YOLOv4) deep learning detector to generate bounding boxes with vehicle classes, locations, and confidence values, and we select the Simple Online and Realtime Tracking with a Deep Association Metric (DeepSORT) tracker to track vehicles using the output of the YOLOv4 detector. Furthermore, to make the detector more suitable for practical use, especially when vehicles are small or occluded, we improve the detector's structure by adding attention mechanisms and reducing parameters, detecting vehicles with relatively high accuracy and low GPU memory usage. With the baseline model, results show that YOLOv4 with DeepSORT achieves 82.4% mean average precision over three vehicle classes with 63.945 MB of parameters at 19.98 frames per second. After optimization, the improved model achieves 85.84% mean average precision over the three detection classes with 44.158 MB of parameters at 18.65 frames per second. Compared with the original YOLOv4, the improved YOLOv4 detector increases the mean average precision by 3.44 percentage points and reduces the parameters by 30.94% while maintaining a high detection speed. This demonstrates the validity and broad applicability of the proposed improved YOLOv4 detector.
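At the heart of the DeepSORT stage is associating new detections with existing tracks. The generic sketch below shows only the spatial-overlap part of that association (IoU scoring plus optimal assignment); it omits the Kalman prediction and the deep appearance metric, and it is not code from the thesis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Match existing track boxes to new detection boxes by maximizing total IoU."""
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)   # optimal bipartite matching
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

# Toy example: two tracks, two detections slightly shifted
tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]
dets = [[21, 19, 31, 29], [1, 0, 11, 10]]
print(associate(tracks, dets))   # [(0, 1), (1, 0)]
```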
14

Exploring the Use of Attention for Generation Z Fashion Style Recognition with User Annotations as Labels / Undersökande av uppmärksamhet för igenkänning av Generation Z:s klädstilar med användarannoteringar som träningsetiketter

Samakovlis, Niki January 2023
As e-commerce and online shopping have grown worldwide, interest in and research on intelligent fashion systems have expanded. Given the competitive nature of the fashion market, digital marketplaces depend on determining customer preferences. The fashion preferences of the next generation of consumers, Generation Z, are largely discovered on social media, where new fashion styles have emerged. For digital marketplaces to attract Generation Z consumers, an understanding of their fashion style preferences may be crucial. However, fashion style recognition remains challenging due to the subjective nature of fashion styles. Previous research has approached the task by fine-tuning pre-trained convolutional neural networks (CNNs). The disadvantage of this approach is that a CNN on its own fails to capture the subtle visual differences between clothing items. Hence, this thesis approaches clothing style recognition as a fine-grained image recognition task by incorporating into the network a component that allows the model to focus on specific parts of the input images, referred to as an attention mechanism. Specifically, a convolutional block attention module (CBAM) is added to a CNN. Based on the results, it is concluded that the fine-tuned CNN without the attention module achieves superior performance. In contrast, qualitative analysis of Grad-CAM visualizations shows that the attention mechanism helps the CNN capture discriminative features, while the network without the attention module tends to make predictions based on dataset bias. For a fair comparison, future work should extend this research by refining the dataset or using an additional dataset.
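For reference, CBAM applies channel attention followed by spatial attention to a convolutional feature map. The sketch below follows the published CBAM design in simplified PyTorch form; it is a generic illustration, not the thesis implementation.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                  # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))                   # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)    # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1) # (B, 2, H, W)
        return x * torch.sigmoid(self.spatial(s))           # spatial attention

feat = torch.randn(2, 256, 14, 14)
print(CBAM(256)(feat).shape)    # torch.Size([2, 256, 14, 14])
```

Such a module is typically inserted after a convolutional block of the backbone so that the refined feature map replaces the original one.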
15

Self-Supervised Representation Learning for Content Based Image Retrieval

Govindarajan, Hariprasath January 2020
Automotive technologies and fully autonomous driving have seen tremendous growth in recent times and have benefitted from extensive deep learning research. State-of-the-art deep learning methods are largely supervised and require labelled data for training. However, the annotation process for image data is time-consuming and costly in terms of human effort. It is therefore of interest to find informative samples for labelling by Content Based Image Retrieval (CBIR). Generally, a CBIR method takes a query image as input and returns a set of images that are semantically similar to the query image. The retrieval is achieved by transforming images into feature representations in a latent space, where it is possible to reason about image similarity in terms of image content. In this thesis, a self-supervised method is developed to learn feature representations of road-scene images. The self-supervised method learns feature representations for images by adapting intermediate convolutional features from an existing deep Convolutional Neural Network (CNN). A contrastive approach based on Noise Contrastive Estimation (NCE) is used to train the feature learning model. For complex images like road scenes, where multiple image aspects can occur simultaneously, it is important to embed all the salient image aspects in the feature representation. To achieve this, the output feature representation is obtained as an ensemble of feature embeddings which are learned by focusing on different image aspects. An attention mechanism is incorporated to encourage each ensemble member to focus on different image aspects. For comparison, a self-supervised model without attention is considered, and a simple dimensionality reduction approach using SVD is treated as the baseline. The methods are evaluated on nine different evaluation datasets using CBIR performance metrics. The datasets correspond to different image aspects and concern the images at different spatial levels: global, semi-global and local. The feature representations learned by the self-supervised methods are shown to perform better than the SVD approach. Taking into account that no labelled data is required for training, learning representations for road-scene images using self-supervised methods appears to be a promising direction. Usage of multiple query images to emphasize a query intention is investigated, and a clear improvement in CBIR performance is observed. It is inconclusive whether the addition of an attention mechanism impacts CBIR performance. The attention method shows some positive signs based on qualitative analysis and also performs better than the other methods on one of the evaluation datasets containing a local aspect. This method for learning feature representations is promising but requires further research involving more diverse and complex image aspects.
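The contrastive training signal can be illustrated with an InfoNCE-style loss, a close relative of the NCE objective mentioned above; the sketch below is a generic simplification with assumed embedding sizes, not the thesis implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_nce_loss(queries, keys, temperature=0.07):
    """InfoNCE-style loss: the positive key for each query sits at the same batch
    index; all other keys in the batch act as negatives."""
    q = F.normalize(queries, dim=1)          # (N, D) embeddings of one view
    k = F.normalize(keys, dim=1)             # (N, D) embeddings of the positive view
    logits = q @ k.t() / temperature         # (N, N) cosine-similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)  # positives lie on the diagonal

# Toy usage with random embeddings standing in for CNN features
q = torch.randn(8, 128)
k = q + 0.1 * torch.randn(8, 128)            # slightly perturbed positives
print(contrastive_nce_loss(q, k).item())
```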
16

Temporal Localization of Representations in Recurrent Neural Networks

Najam, Asadullah January 2023
Recurrent Neural Networks (RNNs) are pivotal in deep learning for time series prediction, but they suffer from 'exploding values' and 'gradient decay', particularly when learning temporally distant interactions. Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) have addressed these issues to an extent, but the precise mitigating mechanisms remain unclear. Moreover, the success of feedforward neural networks with an 'attention mechanism' on time series tasks raises questions about the solutions offered by LSTMs and GRUs. This study explores an alternative explanation for the challenges RNNs face in learning long-range correlations in the input data: could the issue lie in the movement of representations (how hidden nodes store and process information) across nodes, rather than in their localization? The evidence presented suggests that RNNs can indeed possess "moving representations", and that certain training conditions reduce this movement. These findings point towards the need for further research on localizing representations.
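The gradient decay referred to above can be observed directly: in a vanilla RNN, the gradient of a loss at the final time step with respect to early inputs shrinks with temporal distance. A minimal, generic PyTorch demonstration (not from the thesis):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)

x = torch.randn(1, 100, 8, requires_grad=True)   # one sequence, 100 time steps
out, _ = rnn(x)
loss = out[:, -1].pow(2).sum()                   # loss depends only on the last step
loss.backward()

# Gradient magnitude with respect to each input time step; early steps receive far
# smaller gradients, the "gradient decay" that gating (LSTM/GRU) and attention mitigate.
grad_norms = x.grad.norm(dim=-1).squeeze(0)
print(grad_norms[::10])
```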
17

Exploring Graph Neural Networks for Clustering and Classification

Tahabi, Fattah Muhammad 12 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Graph Neural Networks (GNNs) have become highly popular and prominent deep learning techniques for analyzing graph-structured data, owing to their ability to solve complex real-world problems. Because graphs provide an efficient way to represent abstract concepts, modern research seeks to overcome a limitation of classical graph theory, which requires prior knowledge of the graph structure before traditional algorithms can be employed. GNNs, an impressive framework for representation learning on graphs, have already produced many state-of-the-art techniques for node classification, link prediction, and graph classification tasks. GNNs can learn meaningful representations of graphs by combining topological structure, node attributes, and neighborhood aggregation to solve supervised, semi-supervised, and unsupervised graph-based problems. In this study, the usefulness of GNNs is analyzed primarily from two aspects: clustering and classification. We focus on these two techniques because they are the most popular strategies in data mining for making sense of collected data and performing predictive analysis.
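As a reference point for the neighborhood aggregation mentioned above, a single graph-convolution layer combines each node's features with those of its neighbors through a normalized adjacency matrix. The dense PyTorch sketch below is a generic illustration, not code from the thesis.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):                    # x: (N, F), adj: (N, N) dense adjacency
        a_hat = adj + torch.eye(adj.size(0))      # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        return torch.relu(a_norm @ self.linear(x))

# Toy graph: 4 nodes in a ring, 3 input features per node
adj = torch.tensor([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
x = torch.randn(4, 3)
print(GCNLayer(3, 8)(x, adj).shape)               # torch.Size([4, 8])
```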
18

Biodiversity Monitoring Using Machine Learning for Animal Detection and Tracking / Övervakning av biologisk mångfald med hjälp av maskininlärning för upptäckt och spårning av djur

Zhou, Qian January 2023
As an important indicator of biodiversity and of the ecological condition of a region, the number and distribution of animals have received more and more attention from agencies such as nature reserves, wetland parks, and animal protection supervision departments. To protect biodiversity, we need to be able to detect and track the movement of animals to understand which animals are visiting a space. This thesis uses an improved You Only Look Once version 5 (YOLOv5) object detection algorithm and the Simple Online and Realtime Tracking with a Deep Association Metric (DeepSORT) tracking algorithm to provide technical support for bird monitoring, identification and tracking. Specifically, the thesis tries different improvements to YOLOv5 to address the difficulty of detecting small targets in images. In the backbone network, different attention modules are added to enhance the network's feature extraction ability; in the neck network, the Bi-Directional Feature Pyramid Network (BiFPN) structure replaces the Path Aggregation Network (PAN) structure to make better use of low-level features; in the detection head, a high-resolution detection head is added to improve the detection of tiny targets. In addition, a better loss function is used to improve the algorithm's performance on small birds. The improved algorithms are evaluated in multiple comparative experiments on the VisDrone dataset and a dataset of bird flight images. The results show that, compared with the YOLOv5 baseline, on the VisDrone dataset the Space-to-Depth plus non-strided Convolution (SPD-Conv) variant achieves the highest training mean Average Precision (mAP) of all methods, with an increase from 0.325 to 0.419; on the bird dataset, the best training mAP is obtained by adding a P2 detection layer, with an improvement from 0.701 to 0.724. After combining YOLO with DeepSORT to implement the tracking function, the improved detector also yields a better final tracking result.
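For reference, SPD-Conv replaces strided convolutions and pooling with a space-to-depth rearrangement followed by a non-strided convolution, so fine-grained detail useful for small objects is not discarded during downsampling. A minimal, generic PyTorch sketch (not the thesis code):

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided convolution (SPD-Conv)."""
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        # PixelUnshuffle moves spatial blocks into channels:
        # (C, H, W) -> (C * scale^2, H / scale, W / scale)
        self.space_to_depth = nn.PixelUnshuffle(scale)
        self.conv = nn.Conv2d(in_channels * scale ** 2, out_channels,
                              kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.conv(self.space_to_depth(x))

x = torch.randn(1, 64, 80, 80)          # a feature map from an early backbone stage
print(SPDConv(64, 128)(x).shape)        # torch.Size([1, 128, 40, 40])
```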
19

Ambient Temperature Estimation: Exploring Machine Learning Models for Ambient Temperature Estimation Using Mobile's Internal Sensors

Omar, Alfakir January 2024
Ambient temperature poses a significant challenge to the performance of mobile phones, affecting their internal thermal flow and increasing the likelihood of overheating, which leads to a compromised user experience. Knowledge of the ambient temperature is crucial, as it helps engineers correlate external factors with the internal factors that might affect a phone's performance under various conditions. Notably, these devices lack dedicated sensors to measure ambient temperature independently, underscoring the need for innovative solutions to estimate it accurately. In response to this challenge, our research investigates the feasibility of estimating ambient temperature with machine-learning algorithms based on data from internal thermal sensors in Sony mobile phones. Through comprehensive data collection and analysis, custom datasets were constructed to simulate different use-case scenarios, including CPU workloads, camera operation, and GPU tasks. These scenarios introduced varying levels of thermal disturbance, providing a robust basis for evaluating model performance. Feature engineering played a pivotal role in ensuring that the models could effectively interpret the internal thermal dynamics and correlate them with the ambient temperature. The results demonstrate that while simpler models like Linear Regression offer computational efficiency, they fall short in scenarios with complex thermal patterns. In contrast, deep learning models, particularly those incorporating time series analysis, showed superior accuracy and robustness. The Attention-LSTM model, in particular, excelled at generalizing across diverse and novel thermal conditions, although its complexity poses challenges for on-device deployment. This research underscores the importance of selecting appropriate sensors and incorporating a wide range of training scenarios to enhance model performance. It also highlights the potential of advanced machine learning techniques to provide solutions for ambient temperature estimation, thereby contributing to more effective thermal management in mobile devices.
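An attention-augmented LSTM of the kind highlighted above can be sketched as an LSTM whose hidden states are pooled with learned attention weights before a regression head. The PyTorch sketch below is a generic illustration with assumed sensor counts and window lengths, not the thesis model.

```python
import torch
import torch.nn as nn

class AttentionLSTMRegressor(nn.Module):
    """LSTM over sensor readings + attention pooling over time -> scalar temperature estimate."""
    def __init__(self, n_sensors=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)     # scores each time step
        self.out = nn.Linear(hidden, 1)       # regression head

    def forward(self, x):                         # x: (batch, time, n_sensors)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)   # attention weights over time steps
        context = (w * h).sum(dim=1)              # weighted sum of hidden states
        return self.out(context).squeeze(-1)      # predicted ambient temperature

model = AttentionLSTMRegressor()
window = torch.randn(4, 60, 10)               # 4 windows of 60 readings from 10 internal sensors
print(model(window).shape)                    # torch.Size([4])
```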
20

Réseaux de neurones profonds appliqués à la compréhension de la parole / Deep learning applied to spoken language understanding

Simonnet, Edwin 12 February 2019
This thesis is set in the context of the emergence of deep learning and focuses on spoken language understanding, understood as the automatic extraction and representation of the meaning conveyed by the words of a spoken utterance. We study a semantic concept tagging task used in a spoken dialogue system and evaluated on the French MEDIA corpus. Over the past decade, neural models have come to dominate many natural language processing tasks thanks to algorithmic advances and powerful computing hardware such as graphics processors. Many obstacles make the understanding task complex, such as the difficult interpretation of automatic speech transcriptions, since many errors are introduced by the automatic recognition process upstream of the understanding module. We present a review of the state of the art in spoken language understanding and of the supervised machine learning methods used to address it, from classical systems to deep learning techniques. The contributions are then presented along three axes. First, we develop an efficient neural architecture consisting of a bidirectional recurrent encoder-decoder with an attention mechanism. Then we study the management of automatic recognition errors and solutions to limit their impact on performance. Finally, we investigate a disambiguation of the understanding task that makes the system perform better.
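The attention mechanism in such an encoder-decoder computes, at each decoding step, a weighted combination of the encoder states. A minimal additive (Bahdanau-style) attention module in PyTorch, given as a generic illustration rather than the thesis implementation:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: score each encoder state against the decoder state."""
    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (B, dec_dim); enc_outputs: (B, T, enc_dim)
        scores = self.v(torch.tanh(self.w_enc(enc_outputs)
                                   + self.w_dec(dec_state).unsqueeze(1)))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)
        context = (weights * enc_outputs).sum(dim=1)     # (B, enc_dim)
        return context, weights.squeeze(-1)

# Toy shapes: bidirectional encoder states of size 2*256, decoder state of size 256
attn = AdditiveAttention(enc_dim=512, dec_dim=256)
context, weights = attn(torch.randn(2, 256), torch.randn(2, 15, 512))
print(context.shape, weights.shape)   # torch.Size([2, 512]) torch.Size([2, 15])
```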
