81

Dataset Evaluation Method for Vehicle Detection Using TensorFlow Object Detection API / Utvärderingsmetod för dataset inom fordonsigenkänning med användning av TensorFlow Object Detection API

Furundzic, Bojan, Mathisson, Fabian January 2021 (has links)
Recent developments in the field of object detection have highlighted significant variation in quality between visual datasets. As a result, there is a need for a standardized approach to validating visual dataset features and their contribution to performance. With a focus on vehicle detection, this thesis aims to develop an evaluation method for comparing visual datasets. The method was used to determine which dataset contributed the detection model with the greatest ability to detect vehicles. The visual datasets compared in this research were BDD100K, KITTI and Udacity, each used to train a separate model. Applying the developed evaluation method gave a strong indication of BDD100K's superior performance. Further analysis of dataset size, label distribution and average number of labels per image was conducted. In addition, real-world experiments were performed to validate the developed evaluation method. All features and experimental results pointed to BDD100K's superiority over the other datasets, validating the developed evaluation method. Furthermore, the TensorFlow Object Detection API's ability to improve the performance gained from a visual dataset was studied. Through the use of augmentations, it was concluded that the TensorFlow Object Detection API serves as a useful tool for increasing the performance gained from a visual dataset.
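
As a rough illustration of how detection models trained on different datasets can be scored against a common ground truth, the sketch below computes IoU-matched precision and recall in plain Python. It is a minimal sketch under assumed conventions (corner-format boxes, a 0.5 IoU threshold), not the thesis's actual evaluation method:

```python
def iou(box_a, box_b):
    # Boxes as [x1, y1, x2, y2]; returns intersection-over-union in [0, 1].
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    # Greedy one-to-one matching of predicted boxes to ground-truth boxes.
    matched, tp = set(), 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    return tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)
```
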
82

Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems

Awan, Ammar Ahmad 10 September 2020 (has links)
No description available.
83

Book retrieval system : Developing a service for efficient library book retrieval using particle swarm optimization

Woods, Adam January 2024 (has links)
Traditional methods for locating books and resources in libraries often entail browsing catalogs or manual searching, which is time-consuming and inefficient. This thesis investigates the potential of automated digital services to streamline this process by utilizing Wi-Fi signal data for precise indoor localization. Central to this study is the development of a model that employs Wi-Fi signal strength (RSSI) and round-trip time (RTT) to estimate the locations of library users with arm-length accuracy. The thesis aims to enhance the accuracy of location estimation by exploring the complex, nonlinear relationship between the Received Signal Strength Indicator (RSSI) and Round-Trip Time (RTT) within signal fingerprints. The model was developed using an artificial neural network (ANN) to capture the relationship between RSSI and RTT. In addition, this thesis introduces and evaluates the performance of a novel variant of the Particle Swarm Optimization (PSO) algorithm, named Randomized Particle Swarm Optimization (RPSO). By incorporating randomness into the conventional PSO framework, the RPSO algorithm aims to address the limitations of standard PSO, potentially offering more accurate and reliable location estimates. The PSO algorithms, including RPSO, were integrated into the training process of the ANN to optimize the network's weights and biases through direct optimization, as well as to tune the hyperparameters of the ANN's built-in optimizer. The findings suggest that optimizing the hyperparameters yields better results than direct optimization of weights and biases. However, RPSO did not significantly enhance performance compared to standard PSO in this context, indicating the need for further investigation into its application and potential benefits in complex optimization scenarios.
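
For reference, a standard global-best PSO loop of the kind such hyperparameter tuning builds on is sketched below in NumPy. The objective, bounds and constants are illustrative assumptions; the thesis's RPSO variant additionally injects randomness into this framework in a way not detailed in the abstract:

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=50,
        w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Standard global-best PSO minimizing `objective` over a box domain."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example use: tune a scaled learning rate by minimizing validation loss.
# `validation_loss` would be a hypothetical wrapper that trains the ANN
# with the candidate hyperparameters and returns held-out error.
```
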
84

Sequence-to-sequence learning of financial time series in algorithmic trading / Sekvens-till-sekvens-inlärning av finansiella tidsserier inom algoritmisk handel

Arvidsson, Philip, Ånhed, Tobias January 2017 (has links)
Predicting the behavior of financial markets is largely an unsolved problem. The problem has been approached with many different methods, ranging from binary logic and statistical calculations to genetic algorithms. In this thesis, the problem is approached with a machine learning method, namely the Long Short-Term Memory (LSTM) variant of Recurrent Neural Networks (RNNs). Recurrent neural networks are artificial neural networks (ANNs), a class of machine learning algorithm that mimics the neural processing of the mammalian nervous system, specifically designed for time series and sequences. The thesis investigates the capability of the LSTM to model financial market behavior, compares it to the traditional RNN, and evaluates their performance using various measures.
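
A minimal sketch of an LSTM sequence model of the kind described, using tf.keras on a toy series (the window size, layer width and synthetic data are assumptions, not the thesis's setup):

```python
import numpy as np
import tensorflow as tf

# Sliding windows: predict the next value from `window` past observations.
window, n_features = 30, 1
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, n_features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy random-walk series standing in for a price sequence (illustrative).
series = np.cumsum(np.random.randn(1000)).astype("float32")
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
model.fit(X[..., None], y, epochs=5, batch_size=32, verbose=0)
```
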
85

Sound classification with TensorFlow and IoT devices : A technical study / Ljudklassificering med Tensorflow och IOT-enheter : En teknisk studie

Karlsson, David January 2020 (has links)
Artificial intelligence and machine learning have started to become established as recognizable terms to the general public in daily life. Applications such as voice recognition and image recognition are widely used in mobile phones and in autonomous systems such as self-driving cars. This study examines how this technique can be used to classify sound as a complement to video surveillance in different settings, for example a bus station or other areas that might need monitoring. To do this, a Convolutional Neural Network (CNN) was used, since this is a popular architecture for image classification. In this model, every sound has a visual representation in the form of a spectrogram that shows frequencies over time. One of the main goals of this study has been to apply this technique to so-called IoT devices in order to classify sounds in real time, because these devices are relatively affordable and require few resources. A Raspberry Pi was used to run a prototype version using TensorFlow and Keras as base APIs. The study's results show which parts are important to consider in order to build a smooth and reliable system, for example which hardware and software are needed to get started. The results also show which factors are important for streaming live sound and getting reliable results; a classification model's architecture is very important, where different layers and parameters can have a large impact on the end result.
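
A minimal sketch of the spectrogram-plus-CNN pipeline described above, assuming librosa for the audio front end (the thesis names only TensorFlow and Keras; input shapes and class labels are illustrative):

```python
import librosa
import numpy as np
import tensorflow as tf

def to_spectrogram(path, sr=22050, n_mels=64):
    # Load audio and convert it to a log-scaled mel spectrogram "image".
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

n_classes = 5  # e.g. bus, speech, glass break, siren, background (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 1)),   # mel bins x time frames
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
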
86

Noise Reduction in Flash X-ray Imaging Using Deep Learning

Sundman, Tobias January 2018 (has links)
Recent improvements in deep learning architectures, combined with the strength of modern computing hardware such as graphics processing units, have led to significant results in the field of image analysis. In this thesis work, locally connected architectures are employed to reduce noise in flash X-ray diffraction images. The layers in these architectures use convolutional kernels, but without shared weights. This combines the benefit of the lower model memory footprint of convolutional networks with the higher model capacity of fully connected networks. Since the camera used to capture the diffraction images has pixelwise unique characteristics, and thus lacks equivariance, this compromise can be beneficial. The background images of this thesis work were generated with an active laser but without injected samples. Artificial diffraction patterns were then added to these background images, allowing U-Net architectures to be trained to separate them. Architecture A achieved a performance of 0.187 on the test set, roughly translating to 35 fewer photon errors than a model similar to the state of the art. After smoothing the photon errors this performance increased to 0.285, since the U-Net architectures managed to remove flares where the state of the art could not. This can be taken as a proof of concept that locally connected networks are able to separate diffraction from background in flash X-ray imaging.
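
For orientation, a tiny U-Net-style separator in tf.keras is sketched below. For brevity it uses ordinary shared-weight Conv2D layers; the locally connected architectures of the thesis replace these with unshared-weight counterparts. Sizes and the loss are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=64):
    inp = layers.Input((size, size, 1))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c2)
    m1 = layers.concatenate([u1, c1])        # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel mask
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```
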
87

Video Recommendation Based on Object Detection

Nyberg, Selma January 2018 (has links)
In this thesis, various machine learning domains have been combined to build a video recommender system based on object detection. The work combines two extensively studied research fields, recommender systems and computer vision, which are also rapidly growing and popular techniques in commercial markets. To investigate the performance of the approach, three different content-based recommender systems have been implemented at Spotify, based on the following video features: object detections, titles and descriptions, and user preferences. These systems have then been evaluated and compared against each other, together with their hybridized result. Two algorithms have been implemented, the prediction algorithm and the top-N algorithm, where the former is the more reliable source for evaluating the system's performance. The evaluation of the system shows that the overall performance scores for predicting values of the users' liked and disliked videos range from about 40 % to 70 % for the prediction algorithm and from about 15 % to 70 % for the top-N algorithm. The approach based on object detection performs worse than the other approaches. Hence, there seems to be a low correlation between the user preferences and the video contents in terms of object detection data. Therefore, this data is not very suitable for describing the content of videos in the recommender system. However, the results of this study cannot be generalized to other systems before the approach has been evaluated in other environments and on various data sets. Moreover, there is plenty of room for refinements and improvements to the system, and many interesting research areas remain for future work.
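
A minimal sketch of a content-based scoring rule of the kind such a system might use, comparing a video's object-detection profile against profiles of the user's liked and disliked videos (the thesis's actual prediction algorithm is not specified in the abstract; the feature layout is illustrative):

```python
import numpy as np

def predict_score(video_vec, liked_vecs, disliked_vecs):
    """Content-based score: cosine similarity of a video's object-detection
    profile to the mean profiles of liked and disliked videos."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    liked_profile = np.mean(liked_vecs, axis=0)
    disliked_profile = np.mean(disliked_vecs, axis=0)
    return cos(video_vec, liked_profile) - cos(video_vec, disliked_profile)

# Each vector counts detected object classes in a video, e.g.
# [person, car, dog, guitar] frequencies (an assumed feature layout).
```
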
88

探索類神經網路於網路流量異常偵測中的時效性需求 / Exploring the timeliness requirement of artificial neural networks in network traffic anomaly detection

連茂棋, Lian, Mao-Ci Unknown Date (has links)
The prosperity of the cloud means that people do almost everything through the Internet, but there are actors with bad intentions who use malicious programs to create attacks or steal information through network connections. To prevent these cyber-attacks, network traffic data must be checked continuously. However, in the current cloud environment, network data is so large and complex that checking all of it is not only time-consuming but also inefficient. This study uses TensorFlow with multiple Graphics Processing Units (GPUs) to implement an Artificial Neural Network (ANN) mechanism that analyzes network traffic data and derives detection rules that can distinguish normal from malicious traffic; we call this Network Traffic Anomaly Detection (NTAD). Experiments are also designed to verify the timeliness and effectiveness of the derived ANN mechanism. During the experiments, we found that using more GPUs reduces training time, and that using three GPUs for the computation in our experimental design meets the timeliness requirement of NTAD. The preliminary experimental results of the proposed mechanism were better than those obtained by training the ANN with the backpropagation algorithm.
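
A minimal sketch of multi-GPU data-parallel training in modern TensorFlow, using tf.distribute.MirroredStrategy (the thesis's exact multi-GPU setup is not given, and its TensorFlow version may predate this API; the 41-feature input is an illustrative assumption for per-record flow features):

```python
import tensorflow as tf

# Replicate the model across all visible GPUs; gradients are
# aggregated automatically across replicas after each batch.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(41,)),   # assumed flow-feature count
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. anomalous
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
```
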
89

A concept of an intent-based contextual chat-bot with capabilities for continual learning

Strutynskiy, Maksym January 2020 (has links)
Chat-bots are computer programs designed to conduct textual or audible conversations with a single user. The job of a chat-bot is to find the best response to any request the user issues. The best response is one that answers the question and contains relevant information while following grammatical and lexical rules. Modern chat-bots often have trouble accomplishing all of these tasks. State-of-the-art approaches, such as deep learning, and large datasets help chat-bots tackle this problem better. While there are a number of approaches that can be applied to different kinds of bots, datasets of suitable size are not always available. In this work, we introduce and evaluate a method for expanding the size of datasets. This allows chat-bots, in combination with a good learning algorithm, to achieve higher precision on their tasks. The expansion method uses a continual learning approach that allows the bot to expand its own dataset while holding conversations with its users. In this work we test continual learning with the IBM Watson Assistant chat-bot as well as a custom case-study chat-bot implementation. We conduct the testing with a smaller and a larger dataset to find out whether continual learning stays effective as the dataset size increases. The results show that the more conversations the chat-bot holds, the better it gets at guessing the intent of the user. They also show that continual learning works well for both larger and smaller datasets, but the effect depends on the specifics of the chat-bot implementation. While continual learning makes good results better, it also turns bad results into worse ones; the chat-bot should therefore be manually calibrated if the precision of the original results, measured before the expansion, decreases.
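
A minimal sketch of the continual-learning idea, using scikit-learn as a stand-in intent classifier (the thesis tests IBM Watson Assistant and a custom bot; the intents, utterances and refit policy here are illustrative assumptions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed dataset of labeled utterances (assumed examples).
utterances = ["what are your opening hours", "book me a ticket"]
intents = ["hours", "booking"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

def handle(user_text, confirmed_intent=None):
    """Guess the intent; grow the dataset from confirmed conversations."""
    guess = clf.predict([user_text])[0]
    if confirmed_intent is not None:      # e.g. the user accepted the answer
        utterances.append(user_text)
        intents.append(confirmed_intent)
        clf.fit(utterances, intents)      # in practice, refit periodically
    return guess
```
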
90

Paralelní trénování hlubokých neuronových sítí / Parallel Deep Learning

Šlampa, Ondřej January 2017 (has links)
The aim of this thesis is to propose a way to evaluate the favourability of parallel deep learning. I analyze parallel deep learning with a focus on training time, taking into account gradient computation time and weight transfer time. The result of this thesis is a set of proposed equations that estimate the speedup on multiple workers. These equations can be used to determine the ideal number of workers for training.
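
An illustrative form such speedup equations can take, under a simple data-parallel model where each of n workers computes gradients on 1/n of the batch and then exchanges weights (this is an assumed model for illustration, not the thesis's actual equations):

```python
def estimated_speedup(t_grad, t_comm, n_workers):
    """Estimated data-parallel speedup per training step:
    gradient work divides across workers, communication does not."""
    t_parallel = t_grad / n_workers + (t_comm if n_workers > 1 else 0.0)
    return t_grad / t_parallel

# Example: 90 ms gradient step, 10 ms weight transfer -> diminishing returns.
for n in (1, 2, 4, 8):
    print(n, round(estimated_speedup(0.090, 0.010, n), 2))
```
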
