101

Ghosts of Our Past: Neutrino Direction Reconstruction Using Deep Neural Networks

Stjärnholm, Sigfrid January 2021 (has links)
Neutrinos are the perfect cosmic messengers when it comes to investigating the most violent and mysterious astronomical and cosmological events in the Universe. The interaction probability of neutrinos is small, and the flux of high-energy neutrinos decreases quickly with increasing energy. In order to find high-energy neutrinos, large bodies of matter need to be instrumented. A proposed detector station design called ARIANNA is designed to detect neutrino interactions in the Antarctic ice by measuring radio waves that are created due to the Askaryan effect. In this paper, we present a method based on state-of-the-art machine learning techniques to reconstruct the direction of the incoming neutrino, based on the radio emission that it produces. We trained a neural network with simulated data, created with the NuRadioMC framework, and optimized it to make the best possible predictions. The number of training events used was on the order of 10^6. Using two different emission models, we found that the network was able to learn and generalize on the neutrino events with good precision, resulting in a resolution of 4-5°. The model could also make good predictions on a dataset even if it was trained with another emission model. The results produced are promising, especially because classical techniques have not been able to reproduce the same results without prior knowledge of where the neutrino interaction took place. The developed neural network can also be used to assess the performance of other proposed detector designs, to quickly and reliably indicate which design might yield the most value to the scientific community.
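The 4-5° resolution quoted above is an opening angle between the true and the reconstructed neutrino direction. As an illustrative sketch (not code from the thesis; the function names and angles are ours), such an angular-error metric could be computed as:

```python
import numpy as np

def direction_vector(zenith, azimuth):
    """Unit vector from zenith/azimuth angles (radians)."""
    return np.array([
        np.sin(zenith) * np.cos(azimuth),
        np.sin(zenith) * np.sin(azimuth),
        np.cos(zenith),
    ])

def angular_error_deg(v_true, v_pred):
    """Opening angle in degrees between two unit direction vectors."""
    cos_angle = np.clip(np.dot(v_true, v_pred), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# A 5-degree zenith offset at fixed azimuth yields a 5-degree opening angle.
v_true = direction_vector(np.radians(45.0), 0.0)
v_pred = direction_vector(np.radians(50.0), 0.0)
err = angular_error_deg(v_true, v_pred)
```

Averaging this error over a test set of simulated events gives the kind of resolution figure reported above.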
102

Scenanalys - Övervakning och modellering / Scene analysis - Surveillance and modelling

Ali, Hani, Sunnergren, Pontus January 2021 (has links)
Autonomous vehicles can decrease traffic congestion and reduce the number of traffic-related accidents. As there will be millions of autonomous vehicles in the future, a better understanding of the environment will be required. This project aims to create an external automated traffic system that can detect and track 3D objects within a complex traffic situation and send these objects' behavior to a larger-scale project that builds a 3D model of the traffic situation. The project uses the TensorFlow framework and the YOLOv3 algorithm, together with a camera to record traffic situations and a Linux-operated computer. An object-tracking system was evaluated using methods commonly applied in automated traffic management. The final results show that the system is relatively unstable and can sometimes fail to recognize certain objects. If more images are used for the training process, a more robust and much more reliable system could be developed using a similar methodology.
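The detect-and-track pipeline described above pairs a YOLOv3 detector with an association step between frames. A minimal sketch of one common association strategy, greedy IoU matching between previous-frame tracks and new detections (our illustration, not the project's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedily associate previous-frame tracks with new detections by IoU."""
    matches, used = {}, set()
    for t_id, t_box in tracks.items():
        best, best_iou = None, threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches[t_id] = best
            used.add(best)
    return matches

# Track 7 moved slightly; the second detection overlaps it strongly.
matches = match_tracks({7: [0, 0, 10, 10]},
                       [[40, 40, 50, 50], [1, 1, 11, 11]])
```

Unmatched detections would spawn new tracks, and tracks unmatched for several frames would be dropped.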
103

Distinguishing Behavior from Highly Variable Neural Recordings Using Machine Learning

Sasse, Jonathan Patrick 04 June 2018 (has links)
No description available.
104

Wildfire Spread Prediction Using Attention Mechanisms In U-Net

Shah, Kamen Haresh 01 December 2022 (has links) (PDF)
This thesis investigates attention mechanisms for better feature extraction in wildfire spread prediction models. The research examines the U-Net architecture for image segmentation, a process that partitions images by classifying each pixel into one of two classes. The deep learning models explored integrate modern deep learning architectures and the techniques used to optimize them. The models are trained on 12 distinct observational variables derived from the Google Earth Engine catalog. Evaluation is conducted with accuracy, Dice coefficient score, ROC-AUC, and F1-score. This research concludes that when U-Net is augmented with attention mechanisms, the attention component improves feature suppression and recognition, improving overall performance. Furthermore, employing ensemble modeling reduces bias and variance, leading to more consistent and accurate predictions. When inferencing on wildfire propagation at 30-minute intervals, the architecture presented in this research achieved a ROC-AUC score of 86.2% and an accuracy of 82.1%.
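A hedged sketch of the additive attention-gate idea the abstract refers to, reduced to plain NumPy (the shapes, weight matrices, and names are illustrative, not the thesis model): skip-connection features are rescaled by a mask computed jointly from the skip features and the coarser gating signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention gate: scale skip features x by a mask in (0, 1)
    computed from x and the gating signal g (a 1x1-convolution analogue)."""
    q = relu(x @ w_x + g @ w_g)    # joint feature map
    alpha = sigmoid(q @ w_psi)     # per-position attention coefficients
    return x * alpha               # suppressed / highlighted skip features

# Toy shapes: 16 spatial positions, 8 skip channels, 8 gating channels.
x = rng.normal(size=(16, 8))
g = rng.normal(size=(16, 8))
w_x = rng.normal(size=(8, 4))
w_g = rng.normal(size=(8, 4))
w_psi = rng.normal(size=(4, 1))
out = attention_gate(x, g, w_x, w_g, w_psi)
```

Because the mask lies strictly in (0, 1), the gate can only attenuate skip features, which is the "feature suppression" effect mentioned above.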
105

ML implementation for analyzing and estimating product prices / ML implementation för analys och estimation av produktpriser

Kenea, Abel Getachew, Fagerslett, Gabriel January 2024 (has links)
Efficient price management is crucial for companies with many different products to keep track of, leading to the common practice of price logging. Today these prices are often set manually, which is labor-intensive and prone to human error. This project aims to use machine learning to assist in the pricing of products by estimating the prices to be inserted. Multiple machine learning models were tested, and an artificial neural network was implemented to estimate prices effectively. Through additional experimentation, the design of the network was fine-tuned to meet the project's needs. The libraries used for implementing and managing the machine learning models are mainly scikit-learn and TensorFlow. As a result, the trained model has been saved to a file and integrated with an API for accessibility.
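As a hedged illustration of the approach, assuming synthetic data and a hand-rolled network (the thesis used scikit-learn and TensorFlow; nothing below is its actual code or data), a small price-regression network trained on product features might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for product features -> normalized price
# (the real data is not public; this linear rule is purely illustrative).
X = rng.uniform(0.0, 1.0, size=(256, 3))
y = (0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.05).reshape(-1, 1)

# One hidden ReLU layer trained with plain gradient descent on MSE.
w1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.maximum(X @ w1 + b1, 0.0)   # hidden activations
    pred = h @ w2 + b2
    err = pred - y                     # MSE gradient is proportional to the error
    dh = (err @ w2.T) * (h > 0)        # backprop through the ReLU
    w2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(axis=0)
    w1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(axis=0)

mae = float(np.abs((np.maximum(X @ w1 + b1, 0.0) @ w2 + b2) - y).mean())
```

In the real project the trained model would then be serialized to a file and served behind the API mentioned above.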
106

應用機器學習於標準普爾指數期貨 / An application of machine learning to Standard & Poor's 500 index future.

林雋鈜, Lin, Jyun-Hong Unknown Date (has links)
The system predicts the S&P 500 futures market trend by analyzing past transaction data, and gives advice to investors who are hesitant to make decisions. We improved the hybrid AI system proposed by Tsaih et al. (1998), which combines a rule-based system with an artificial neural network to give suggestions based on past data. We improved the hybrid system in the following aspects: (1) the index data are changed from daily to minute resolution; (2) a "moving window" mechanism is adopted, with the goal of finishing training for each window within 60 minutes; (3) one extra input variable, the VIX price, is added; (4) because of the increased computational demand, TensorFlow and GPU computing are applied to improve the system's performance. We found that the VIX variable clearly improves the predictive performance of the proposed system. The average training time is below 60 minutes, although some windows still take slightly more than 60 minutes to train.
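The "moving window" mechanism in point (2) can be sketched as a sliding train/test split over the minute bars (an illustration with made-up window sizes, not the system's code): each window trains on one span of past minutes and is evaluated on the span immediately after it.

```python
def moving_windows(n_samples, train_size, test_size, step):
    """Yield (train_range, test_range) index pairs for a sliding window:
    train on one span of past minutes, test on the span right after it."""
    windows = []
    start = 0
    while start + train_size + test_size <= n_samples:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        windows.append((train, test))
        start += step
    return windows

# 300 minute bars, train on 120, test on the next 30, slide by 30.
wins = moving_windows(n_samples=300, train_size=120, test_size=30, step=30)
```

Retraining per window is what makes the 60-minute training budget per window matter.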
107

Meření podobnosti obrazů s pomocí hlubokého učení / Image similarity measuring using deep learning

Štarha, Dominik January 2018 (has links)
This master's thesis deals with research into deep learning technologies applicable to image data processing. The specific focus of the work is to evaluate the suitability and effectiveness of deep learning for comparing two image inputs. The first, theoretical, part introduces neural networks and deep learning, and describes the available methods for processing image data, their principles, and their benefits. The second, practical, part proposes an appropriate Siamese network model to compare two input images and evaluate their similarity. The output of this work is an evaluation of several possible model configurations, highlighting the best-performing model parameters.
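The core of a Siamese model is a single shared embedding applied to both inputs, with similarity read off a distance between the embeddings. A toy NumPy sketch (the weights, shapes, and names are illustrative, not the thesis model):

```python
import numpy as np

def embed(x, w):
    """Shared embedding branch (the 'Siamese' part): both inputs
    pass through the same weights w."""
    return np.tanh(x @ w)

def pair_distance(x1, x2, w):
    """Euclidean distance between the two embeddings; a small distance
    means 'similar' under the learned metric."""
    return float(np.linalg.norm(embed(x1, w) - embed(x2, w)))

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4))       # one weight matrix, shared by both branches
a = rng.normal(size=8)
d_same = pair_distance(a, a, w)   # identical inputs -> zero distance
d_diff = pair_distance(a, -a, w)  # different inputs -> positive distance
```

In practice w would be trained with a contrastive or triplet loss so that similar image pairs land close together.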
108

Detekce dopravních značek a semaforů / Detection of Traffic Signs and Lights

Oškera, Jan January 2020 (has links)
The thesis focuses on modern methods for detecting traffic signs and traffic lights, both directly in traffic and in retrospective analysis. The main subject is convolutional neural networks (CNNs); the solution uses convolutional neural networks of the YOLO type. The main goal of this thesis is to optimize the speed and accuracy of the models as far as possible. Suitable datasets are examined: a number of datasets, composed of real and synthetic data, are used for training and testing, with the data preprocessed using the Yolo mark tool. Training was carried out at a computer center belonging to the virtual organization MetaCentrum VO. To evaluate detector quality quantitatively, a program was created that reports the detector's success statistically and graphically, using ROC curves and the COCO evaluation protocol. The resulting model achieved an average success rate of up to 81%. The thesis shows the best choice of threshold across versions, model sizes, and IoU. An extension for mobile phones using TensorFlow Lite and Flutter was also created.
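A minimal sketch of the threshold-based matching that underlies COCO-style detector evaluation (our simplification for one IoU threshold; the thesis used the full COCO protocol):

```python
def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(gt_boxes, det_boxes, thr=0.5):
    """Match detections to ground truth at one IoU threshold; each
    ground-truth box may be claimed by at most one detection."""
    matched, tp = set(), 0
    for det in det_boxes:
        for i, gt in enumerate(gt_boxes):
            if i not in matched and iou(det, gt) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(det_boxes) - tp
    fn = len(gt_boxes) - tp
    precision = tp / (tp + fp) if det_boxes else 0.0
    recall = tp / (tp + fn) if gt_boxes else 0.0
    return precision, recall

# One good detection, one false positive, one missed ground truth.
p, r = precision_recall(
    gt_boxes=[[0, 0, 10, 10], [20, 20, 30, 30]],
    det_boxes=[[1, 1, 11, 11], [50, 50, 60, 60]],
)
```

Sweeping the detector's confidence threshold and the IoU threshold produces the curves the thesis reports.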
109

Segmentace lézí roztroušené sklerózy pomocí hlubokých neuronových sítí / Segmentation of multiple sclerosis lesions using deep neural networks

Sasko, Dominik January 2021 (has links)
The main aim of this diploma thesis was the automatic segmentation of multiple sclerosis lesions in MRI scans. The latest segmentation methods based on deep neural networks were tested, and weight-initialization approaches using transfer learning and self-supervised learning were compared. Automatic segmentation of multiple sclerosis lesions is a very challenging problem, primarily because of the highly unbalanced dataset (brain scans usually contain only a small amount of damaged tissue). Another challenge is the manual annotation of these lesions: two different doctors may mark different parts of the brain as damaged, and the Dice coefficient between their annotations is approximately 0.86. Simplifying the annotation process through automation could improve the estimation of lesion load, which in turn could improve the diagnosis of individual patients. Our goal was to propose two techniques that use transfer learning to pretrain the weights, which could later improve the results of current segmentation models. The theoretical part describes the taxonomy of artificial intelligence, machine learning, and deep neural networks, and their use in image segmentation, followed by an overview of multiple sclerosis, its types, symptoms, diagnosis, and treatment. The practical part begins with data preprocessing. First, the brain scans were resampled to the same resolution with the same voxel size, because three different datasets were used, acquired with different scanners from different manufacturers. One dataset also included the skull, which had to be removed with the FSL tool so that only the patient's brain remained. We used 3D scans (FLAIR, T1, and T2 modalities), which were split into individual 2D slices and fed into a neural network with an encoder-decoder architecture. The training dataset contained 6720 slices with a resolution of 192 x 192 pixels (after removing slices whose masks contained no positive values). The loss function was Combo loss (a combination of Dice loss and a modified cross-entropy). The first method used weights pretrained on the ImageNet dataset for the encoder of a U-Net architecture, with the encoder weights either frozen or unfrozen, compared against random initialization; in this case only the FLAIR modality was used. Transfer learning increased the tracked metric from approximately 0.4 to 0.6, and the difference between frozen and unfrozen encoder weights was about 0.02. The second proposed technique used a self-supervised context encoder with Generative Adversarial Networks (GANs) to pretrain the weights. This network used all three modalities, including slices with empty masks (23040 images in total). The GAN's task was to inpaint brain scans covered by a black checkerboard mask. The weights learned this way were then loaded into the encoder and applied to our segmentation problem. This experiment did not yield better results, with DSC values of 0.29 and 0.09 (unfrozen and frozen encoder weights, respectively). The sharp drop in the metric may have been caused by using weights pretrained on a distant task (segmentation versus the self-supervised context encoder), as well as by the difficulty of the task given the unbalanced dataset.
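The Combo loss mentioned above combines a soft Dice loss with a class-weighted cross-entropy. A NumPy sketch under assumed weighting parameters (alpha and beta below are illustrative defaults, not the thesis's values):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|)."""
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def weighted_cross_entropy(y_true, y_pred, beta=0.7, eps=1e-7):
    """Cross-entropy with extra weight beta on the (rare) lesion class."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(beta * y_true * np.log(y_pred)
                    + (1.0 - beta) * (1.0 - y_true) * np.log(1.0 - y_pred))

def combo_loss(y_true, y_pred, alpha=0.5):
    """Combo loss: alpha-weighted sum of weighted CE and Dice loss."""
    return (alpha * weighted_cross_entropy(y_true, y_pred)
            + (1.0 - alpha) * dice_loss(y_true, y_pred))

y_true = np.array([0.0, 0.0, 1.0, 1.0])
perfect = combo_loss(y_true, y_true)                        # near zero
poor = combo_loss(y_true, np.array([0.9, 0.9, 0.1, 0.1]))   # penalized
```

The Dice term directly counters the class imbalance the abstract describes, since it ignores the abundant true negatives.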
110

Detekce objektů v laserových skenech pomocí konvolučních neuronových sítí / Object Detection in the Laser Scans Using Convolutional Neural Networks

Marko, Peter January 2021 (has links)
This thesis is aimed at detecting horizontal road marking lines in a point cloud obtained by mobile laser mapping. The system works interactively with the user, who marks the beginning of a traffic line; the program then gradually detects the remaining parts of the line and creates its vector representation. Initially, the point cloud is projected onto a horizontal plane, creating a 2D image that is segmented by a U-Net convolutional neural network. The segmentation marks one traffic line and is converted to a polyline, which can be used in a geo-information system. During testing, the U-Net achieved a segmentation accuracy of 98.8%, a specificity of 99.5%, and a sensitivity of 72.9%. The estimated polyline reached an average deviation of 1.8 cm.
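The first step, projecting the point cloud onto a horizontal plane, can be sketched as a simple occupancy-grid rasterization (the cell size and names are illustrative, not the thesis implementation):

```python
import numpy as np

def project_to_grid(points, cell=0.1):
    """Project a point cloud onto the horizontal (x, y) plane: build a
    2D image where each cell counts the points falling above it."""
    xy = points[:, :2]                            # drop the z coordinate
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    img = np.zeros(shape, dtype=int)
    for i, j in idx:
        img[i, j] += 1
    return img

pts = np.array([
    [0.00, 0.00, 1.2],
    [0.05, 0.05, 1.3],   # falls in the same 10 cm cell as the first point
    [0.95, 0.00, 1.1],
])
img = project_to_grid(pts, cell=0.1)
```

In the full pipeline this image (often with intensity rather than a raw count per cell) is what the U-Net segments before vectorization into a polyline.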
