431 |
Ghosts of Our Past: Neutrino Direction Reconstruction Using Deep Neural Networks
Stjärnholm, Sigfrid January 2021 (has links)
Neutrinos are the perfect cosmic messengers when it comes to investigating the most violent and mysterious astronomical and cosmological events in the Universe. The interaction probability of neutrinos is small, and the flux of high-energy neutrinos decreases quickly with increasing energy. In order to find high-energy neutrinos, large bodies of matter need to be instrumented. A proposed detector station design, ARIANNA, is designed to detect neutrino interactions in the Antarctic ice by measuring radio waves that are created due to the Askaryan effect. In this paper, we present a method based on state-of-the-art machine learning techniques to reconstruct the direction of the incoming neutrino, based on the radio emission that it produces. We trained a neural network with simulated data, created with the NuRadioMC framework, and optimized it to make the best possible predictions. The number of training events used was on the order of 10⁶. Using two different emission models, we found that the network was able to learn and generalize on the neutrino events with good precision, resulting in a resolution of 4-5°. The model could also make good predictions on a dataset even if it was trained with another emission model. The results produced are promising, especially because classical techniques have not been able to reproduce the same results without prior knowledge of where the neutrino interaction took place. The developed neural network can also be used to assess the performance of other proposed detector designs, to quickly and reliably give an indication of which design might yield the most value to the scientific community.
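The quoted 4-5° resolution is a statement about the angular error between the true and the reconstructed neutrino direction. As a hedged illustration (function names and the percentile convention are our assumptions, not details taken from the thesis), such a figure can be computed from paired unit direction vectors:

```python
import numpy as np

def space_angle_deg(true_dirs, pred_dirs):
    """Angle in degrees between paired 3D direction vectors, shape (N, 3)."""
    # Normalize defensively in case predictions are not exactly unit length.
    t = true_dirs / np.linalg.norm(true_dirs, axis=1, keepdims=True)
    p = pred_dirs / np.linalg.norm(pred_dirs, axis=1, keepdims=True)
    cosang = np.clip(np.sum(t * p, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

def resolution_68(true_dirs, pred_dirs):
    """Resolution quoted as the 68th percentile of the angular-error distribution."""
    return np.percentile(space_angle_deg(true_dirs, pred_dirs), 68)
```

A resolution of 4-5° would then mean that roughly two thirds of reconstructed events fall within that angle of the true direction.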
|
432 |
Cooperative security log analysis using machine learning : Analyzing different approaches to log featurization and classification
Malmfors, Fredrik January 2022 (has links)
This thesis evaluates the performance of different machine learning approaches to log classification based on a dataset derived from simulating intrusive behavior towards an enterprise web application. The first experiment consists of performing attacks against the web app and correlating them with the logs to create a labeled dataset. The second experiment consists of one unsupervised model based on a variational autoencoder and four supervised models based on both conventional feature-engineering techniques with deep neural networks and embedding-based feature techniques followed by long short-term memory (LSTM) architectures and convolutional neural networks. With this dataset, the embedding-based approaches performed much better than the conventional one. The autoencoder did not perform well compared to the supervised models. To conclude, embedding-based approaches show promise even on datasets with different characteristics compared to natural language.
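The contrast between conventional feature engineering and embedding-based featurization comes down to how a raw log line becomes model input. A minimal sketch of the embedding-side preprocessing, turning log tokens into padded integer id sequences for an embedding layer (the vocabulary and padding conventions are our assumptions, not the thesis's actual pipeline):

```python
from collections import Counter

def build_vocab(lines, max_size=1000):
    """Map the most frequent tokens to integer ids; 0 = padding, 1 = unknown."""
    counts = Counter(tok for line in lines for tok in line.split())
    return {tok: i + 2 for i, (tok, _) in enumerate(counts.most_common(max_size))}

def to_ids(line, vocab, seq_len=16):
    """Convert one log line to a fixed-length id sequence, right-padded with 0."""
    ids = [vocab.get(tok, 1) for tok in line.split()][:seq_len]
    return ids + [0] * (seq_len - len(ids))
```

These id sequences are what an embedding layer consumes, whereas a conventional pipeline would instead hand-craft numeric features (counts, lengths, status codes) per line.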
|
433 |
Deep Learning Models for Human Activity Recognition
Albert Florea, George, Weilid, Filip January 2019 (has links)
The Augmented Multi-party Interaction (AMI) Meeting Corpus database is used to investigate group activity recognition in an office environment. The database provides researchers with remote-controlled meetings and natural meetings in an office environment; the meeting scenario is a four-person office room. To achieve group activity recognition, video frames and 2-dimensional audio spectrograms were extracted from the AMI database. The video frames were RGB color images and the audio spectrograms had one color channel. The video frames were produced in batches so that temporal features could be evaluated together with the audio spectrograms. It has been shown that including temporal features both during model training and when predicting the behavior of an activity increases validation accuracy compared to models that only use spatial features [1]. Deep learning architectures have been implemented to recognize different human activities in the AMI office environment using the extracted data from the AMI database. The neural network models were built using the Keras API together with the TensorFlow library. There are different types of neural network architectures; the architecture types investigated in this project were Residual Neural Network, Visual Geometry Group 16, Inception V3 and RCNN (Recurrent Neural Network). ImageNet weights, provided by the Keras API and optimized for each base model [2], were used to initialize the weights of the neural network base models; the base models use these weights when extracting features from the input data. Feature extraction using ImageNet weights or random weights together with the base models showed promising results. Both deep learning using dense layers and LSTM spatio-temporal sequence prediction were implemented successfully.
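Producing video frames "in batches so that temporal features could be evaluated" can be pictured as sliding a fixed-length window over the frame sequence, so each training sample is a short clip rather than a single image. A minimal numpy sketch (the window length and full overlap are our assumptions, not the thesis's exact batching scheme):

```python
import numpy as np

def make_temporal_batches(frames, seq_len):
    """Stack consecutive frames into overlapping windows.

    frames: array of shape (T, H, W, C); returns (T - seq_len + 1, seq_len, H, W, C),
    the shape an LSTM- or RCNN-style model expects for spatio-temporal input.
    """
    n = frames.shape[0] - seq_len + 1
    return np.stack([frames[i:i + seq_len] for i in range(n)])
```

A purely spatial model would instead consume the `(H, W, C)` frames one at a time, discarding the ordering that the windowed form preserves.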
|
434 |
Streamlining automated license plate recognition using artificial neural networks for smart home use
Drottsgård, Alexander, Andreassen, Jens January 2019 (has links)
The concept of automated recognition and reading of license plates has evolved a lot in recent years, and the use of artificial neural networks has been introduced on a small scale with promising results. We looked into the possibility of using this in an automated garage port system and implemented a prototype for testing. The traditional process for reading a license plate requires multiple steps, sometimes up to five. These steps each carry a margin of error, which aggregated can lead to over 30% risk of failure. In this paper we addressed this issue with the help of an artificial neural network. We developed a process with only two steps for reading a license plate: (1) localize the license plate, (2) read the characters on the plate. This reduced the number of steps to half of the previous number and also reduced the risk of errors by 13%. We performed a literature review to find the best suited algorithm for the task of localizing the license plate in our specific environment. We found Faster R-CNN, an algorithm which uses multiple artificial neural networks. We used the Design and Creation method to implement a proof-of-concept prototype using our approach, which proved that this is possible in a real environment.
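The claim that per-step error margins aggregate to over 30% failure risk follows from multiplying per-step success probabilities, assuming independent steps. A toy sketch of that arithmetic (the 93% per-step accuracy below is an illustrative assumption, not a measurement from the thesis):

```python
def pipeline_success(step_accuracies):
    """End-to-end success probability of a pipeline of independent steps."""
    p = 1.0
    for acc in step_accuracies:
        p *= acc
    return p

# Five steps at 93% each already push the overall failure risk past 30%,
# while a two-step pipeline at the same per-step accuracy stays well below it.
failure_5 = 1.0 - pipeline_success([0.93] * 5)
failure_2 = 1.0 - pipeline_success([0.93] * 2)
```

This is why halving the number of steps pays off disproportionately: errors compound multiplicatively across the pipeline.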
|
435 |
3D Object Detection Using Virtual Environment Assisted Deep Network Training
Dale, Ashley S. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic and real-world data, F1 scores improved in four of the five classes: the average maximum F1-score over all classes and all epochs for the networks trained with synthetic data is F1∗ = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score is σ∗_F1 = 0.015 for the synthetically trained networks, compared to σ_F1 = 0.020 for the networks trained exclusively with real data. Varying the backgrounds in the synthetic data was shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network; the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder, then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.
|
436 |
Cost-Aware Machine Learning and Deep Learning for Extremely Imbalanced Data
Ahmed, Jishan 11 August 2023 (has links)
No description available.
|
437 |
Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out
Adlersson, Albert January 2023 (has links)
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer with four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps while achieving a validation accuracy of 85% and a validation AUC of 93%, compared to others who achieve an AUC of 87%. Our results show that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones. This makes the two groups of XAI methods good complements to each other. Images of MSI with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Despite this drawback, our assessment is that XAI can be a useful hypothesis-generating tool for research in medicine. Our results indicate that our CNN utilizes the same features as our basic pathology annotations when classifying MSI, with some features of basic pathology missing, and it is with these feature differences that we successfully generate new hypotheses.
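Grad-CAM, one of the four XAI methods compared above, weights each convolutional feature map by the spatial average of its gradient and sums the result through a ReLU to produce a heat map. A framework-free numpy sketch of that core computation (array shapes and names are our assumptions, not the thesis's implementation):

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Toy Grad-CAM over one image.

    feature_maps: (C, H, W) activations of the chosen conv layer.
    grads:        (C, H, W) gradients of the class score w.r.t. those activations.
    Returns a (H, W) heat map normalized to [0, 1].
    """
    weights = grads.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, feature_maps, 1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                      # ReLU: keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam
```

The resulting map is coarse (it lives at the conv layer's resolution), which is consistent with Grad-CAM explaining low-level, spatially diffuse features better than fine-grained ones.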
|
438 |
Evaluation of Computer Tomography based Cancer Diagnostics with the help of 3D Printed Phantoms and Deep Learning
Back, Alex, Pandurevic, Pontus January 2023 (has links)
Computed x-ray tomography is one of the most common medical imaging modalities, and as such, ways of improving the images are of high relevance. Applying deep learning methods to denoise CT images has been of particular interest in recent years. In this study, rather than using traditional denoising metrics such as MSE or PSNR for evaluation, we use a radiomic approach combined with 3D printed phantoms as a "ground truth" to compare with. Having a ground truth ensures that we can say with absolute certainty what a scanned tumor is supposed to look like and compare our results to a true value. This performance metric is better suited for evaluation than MSE, since we want to maintain structures and edges in tumors, and MSE-based evaluation rewards over-smoothing. Here we apply U-Net networks to images of 3D printed tumors. The four tumors and a lung phantom were printed with PLA filament at an 80% fill rate with a gyroidal pattern to mimic soft tissue in a CT scan while maintaining isotropy. CT images of the 3D printed phantom and tumors were taken with a GE Revolution DE scanner at Karolinska University Hospital. The networks were trained on the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset, mapping low-dose CT images to normal-dose CT images using three different loss functions: l1, vgg16, and vgg16_l1. Evaluating the networks on RadiomicsShape features from SlicerRadiomics®, we find competitive performance with TrueFidelity™ Deep Learning Image Reconstruction (DLIR) by GE HealthCare™. One of our networks (UNet_alt, with the vgg16_l1 loss function, 32 features, and batch size 16 in training) outperformed TrueFidelity in 63% of cases across four different kinds of tumors, when evaluated by counting whether a radiomic feature has a lower relative error against ground truth after our denoising. The same network outperformed FBP in 84% of cases, which, combined with the majority of our networks performing substantially better against FBP than against TrueFidelity, shows the viability of DLIR compared to older methods such as FBP.
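The 63% and 84% figures come from counting, feature by feature, whether one method's radiomic value has a lower relative error against the 3D printed ground truth than another's. A minimal numpy sketch of that counting rule (function names are ours, not from the thesis):

```python
import numpy as np

def relative_error(values, truth):
    """Per-feature relative error against a known ground truth."""
    return np.abs(values - truth) / np.abs(truth)

def win_fraction(method_a, method_b, truth):
    """Fraction of radiomic features where method A is closer to truth than B."""
    wins = relative_error(method_a, truth) < relative_error(method_b, truth)
    return float(np.mean(wins))
```

The printed phantoms are what make this metric possible: with simulated or clinical data alone there is no exact `truth` vector to measure against.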
|
439 |
Data Driven Video Source Camera Identification
Hopkins, Nicholas Christian 15 May 2023 (has links)
No description available.
|
440 |
Proposal networks in object detection
Grossman, Mikael January 2019 (has links)
Locating and extracting useful data from images is a task that has been revolutionized in the last decade, as computing power has risen to a level where deep neural networks can be used with success. A type of neural network that uses the convolution operation, the convolutional neural network (CNN), is suited for image-related tasks. Using the convolution operation creates opportunities for the network to learn its own filters, which previously had to be hand-engineered. For locating objects in an image, the state-of-the-art Faster R-CNN model predicts objects in two parts. First, the region proposal network (RPN) extracts regions of the picture where an object is likely to be found. Second, a detector verifies the likelihood of an object being in that region. For this thesis, we review the current literature on artificial neural networks, object detection methods and proposal methods, and present our new way of generating proposals. By replacing the RPN with our network, the multiscale proposal network (MPN), we increase the average precision (AP) by 12% and reduce the computation time per image by 10%.
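Average precision comparisons between proposal networks rest on matching predicted boxes to ground truth via intersection-over-union (IoU). A minimal sketch of the IoU computation underlying such comparisons (the `(x1, y1, x2, y2)` corner convention is our assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A proposal typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, and AP summarizes precision over recall under that matching rule.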
|