21.
Using Latent Discourse Indicators to identify goodness in online conversations. Ayush Jain, 16 January 2020.
In this work, we model latent discourse indicators to classify constructive and collaborative conversations online. Such conversations are considered good because they are rich in content and have a sense of direction toward resolving an issue, solving a problem, or gaining new insights and knowledge. These discourse indicators characterize the flow of information, sentiment, and community structure within discussions. We build a deep relational model that captures these complex discourse behaviors as latent variables and makes a global prediction about the overall conversation based on these higher-level behaviors. We use DRaiL, a declarative deep relational learning platform built on PyTorch, in which the relevant discourse behaviors are formulated as discrete latent variables and scored by a deep model. These variables capture the nuances of online conversations and provide the information needed to predict the presence or absence of collaborative and constructive characteristics in an entire conversational thread. We show that jointly modeling such competing latent behaviors improves performance over traditional direct classification methods, in which all raw features are simply combined to predict the final decision. We use the Yahoo News Annotated Comments Corpus, containing discussions from Yahoo News forums, and annotate final labels according to our precise, restricted definitions of positively labeled conversations. We formulated the annotation guidelines on a sample set of conversations and resolved annotation conflicts by revisiting those examples.
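As an illustration of the latent-variable formulation, the following PyTorch sketch scores several discrete latent discourse behaviors with small neural networks and combines their scores into a thread-level prediction. This is a minimal sketch of the general idea, not DRaiL itself; all class names, dimensions, and the soft-scoring shortcut are assumptions.

```python
import torch
import torch.nn as nn

class DiscourseScorer(nn.Module):
    """Scores one latent discourse behavior (e.g. sentiment flow)
    from thread-level features. All names here are illustrative."""
    def __init__(self, feat_dim, hidden_dim=64, n_states=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_states),  # logits over latent states
        )

    def forward(self, feats):
        return self.net(feats)

class ConversationClassifier(nn.Module):
    """Combines latent behavior scores into a single
    constructive/non-constructive prediction for the thread."""
    def __init__(self, feat_dim, n_behaviors=3, n_states=2):
        super().__init__()
        self.scorers = nn.ModuleList(
            DiscourseScorer(feat_dim, n_states=n_states)
            for _ in range(n_behaviors)
        )
        self.decision = nn.Linear(n_behaviors * n_states, 2)

    def forward(self, feats):
        # Soft scores over each behavior's states; a DRaiL-style system
        # would instead run discrete MAP inference over these variables.
        scores = [torch.softmax(s(feats), dim=-1) for s in self.scorers]
        return self.decision(torch.cat(scores, dim=-1))

model = ConversationClassifier(feat_dim=128)
logits = model(torch.randn(1, 128))  # one thread-level feature vector
```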
22.
Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems. Awan, Ammar Ahmad, 10 September 2020.
No description available.
23.
Machine Learning model applied to Reactor Dynamics. Nikitopoulos, Dionysios Dimitrios, January 2023.
This project's idea revolved around utilizing the most recent techniques in machine learning, neural networks, and data processing to construct a model to be used as a tool to determine stability during core design work. This goal is achieved by collecting distribution profiles describing the core state at different steady states across five burn-up cycles in a reactor, which serve as the dataset for training the model. An additional cycle is reserved as a blind test dataset for the trained model to predict. The target variables for the predictions are the decay ratio and the frequency, since they describe core stability. The distribution profiles extracted from the core simulator POLCA7 were subjected to several data processing techniques to isolate the variables most relevant to stability. The processed input variables were merged with the decay ratio and frequency for those cases, as calculated with POLCA-T. Two machine learning models, one for each output parameter, were designed with PyTorch to analyze these labeled datasets. The goal of the project was to predict the output variables with an error lower than 0.1 for the decay ratio and 0.05 for the frequency. The models were able to predict the test data with an RMSE of 0.0767 for the decay ratio and 0.0354 for the frequency. Finally, the trained models were saved and tasked with predicting the output parameters for a completely unknown cycle. The RMSE was even better for the unknown cycle: 0.0615 for the decay ratio and 0.0257 for the frequency.
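A minimal PyTorch sketch of the regression setup described above follows, with one small fully connected model per stability parameter. The layer sizes, input dimension, and training snippet are illustrative assumptions, not details from the thesis.

```python
import torch
import torch.nn as nn

# One regressor per stability parameter, trained on processed
# core distribution profiles. All dimensions are placeholders.
def make_regressor(in_dim: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 1),  # scalar output: decay ratio or frequency
    )

decay_model = make_regressor(in_dim=50)
freq_model = make_regressor(in_dim=50)
loss_fn = nn.MSELoss()  # reported RMSE is the square root of this

x = torch.randn(32, 50)      # batch of processed profiles (placeholder)
y_decay = torch.rand(32, 1)  # POLCA-T decay ratios (placeholder)

opt = torch.optim.Adam(decay_model.parameters(), lr=1e-3)
opt.zero_grad()
loss = loss_fn(decay_model(x), y_decay)
loss.backward()
opt.step()
rmse = loss.sqrt().item()  # compare against the 0.1 error target
```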
24.
Object Based Image Retrieval Using Feature Maps of a YOLOv5 Network. Essinger, Hugo and Kivelä, Alexander, January 2022.
As machine learning (ML) methods have gained traction in recent years, some problems regarding the construction of such methods have arisen. One such problem is the collection and labeling of data sets. Specifically, many applications of computer vision (CV) need a set of images, each labeled as belonging to some class or not, and creating such data sets can be very time consuming. This project sets out to tackle this problem by constructing an end-to-end system for searching for objects in images (i.e. an Object Based Image Retrieval (OBIR) method) using an object detection framework (You Only Look Once (YOLO) [16]). The goal of the project was to create a method that, given an image q of an object of interest, searches for the same or similar objects in a set of other images S. The core concept is to pass the image q through an object detection model (in this case YOLOv5 [16]), create a "fingerprint" (a sort of identity for an object) from a set of feature maps extracted from the YOLOv5 [16] model, and look for corresponding similar parts in the feature maps extracted from other images. An investigation was conducted into which values to select for a few different parameters, including a comparison of the performance of a couple of similarity metrics. The parameter combination that achieved the highest F_Top_300 score (a measure indicating the proportion of relevant images retrieved among the top 300 recommended images) in the parameter selection phase was:

Layer: 23
Pooling method: max
Similarity metric: Euclidean
Fingerprint kernel size: 4

Evaluation of the method with these parameters resulted in the following F_Top_300 scores:

Mouse: 0.820
Duck: 0.640
Coin: 0.770
Jet ski: 0.443
Handgun: 0.807
Average: 0.696
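The fingerprint-matching step can be illustrated with the following sketch, which max-pools a query feature map into a small fingerprint and slides it over a target feature map under a Euclidean score. It assumes the layer-23 activations have already been extracted (for example with forward hooks); the shapes and helper names are hypothetical.

```python
import torch
import torch.nn.functional as F

def fingerprint(feat: torch.Tensor, kern_sz: int = 4) -> torch.Tensor:
    """Max-pool a (C, H, W) query feature map down to (C, k, k)."""
    return F.adaptive_max_pool2d(feat, (kern_sz, kern_sz))

def similarity_map(fp: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
    """Slide the fingerprint over the target feature map and score
    each location by negative Euclidean distance (higher = closer)."""
    c, k, _ = fp.shape
    # Extract all k-by-k patches: (C, H', W', k, k)
    patches = target_feat.unfold(1, k, 1).unfold(2, k, 1)
    diff = patches - fp.view(c, 1, 1, k, k)
    return -diff.pow(2).sum(dim=(0, 3, 4)).sqrt()  # (H', W')

query_feat = torch.randn(512, 20, 20)   # placeholder layer-23 activations
target_feat = torch.randn(512, 20, 20)
sim = similarity_map(fingerprint(query_feat), target_feat)
score = sim.max()  # image-level retrieval score used for ranking
```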
25.
Deep learning based QRS delineator. Malina, Ondřej, January 2021.
This thesis deals with the automatic measurement of the duration of QRS complexes in ECG signals. Special emphasis is placed on the possibility of automatically detecting QRS complexes while the cardiac tissue is being excited by a pacemaker. The work is divided into four logical units. The first part deals with the heart as an organ: it describes the origin and spread of excitation in the heart, its possible pathologies and their manifestations in the ECG recording, and also covers pacing and ECG measurement during simultaneous pacing. The second part contains a brief introduction to machine and deep learning. The third part surveys current deep learning approaches to QRS duration (QRSd) detection. The fourth part deals with the design and implementation of the author's own deep learning model, able to detect the onsets and offsets of QRS complexes in ECG recordings. It describes the data preprocessing implemented in the MATLAB programming environment; the model itself was implemented in Python using the PyTorch and NumPy modules.
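As an illustration of what such a delineator can look like, the sketch below labels every ECG sample as background, QRS onset, or QRS offset with a small 1D convolutional network. The architecture and signal dimensions are assumptions, not the thesis's actual model.

```python
import torch
import torch.nn as nn

class QRSDelineator(nn.Module):
    """Per-sample classifier over an ECG signal; argmax of the
    3-class logits marks onset and offset positions."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        # 3 classes per sample: background, QRS onset, QRS offset
        self.head = nn.Conv1d(hidden, 3, kernel_size=1)

    def forward(self, ecg):  # ecg: (batch, 1, n_samples)
        return self.head(self.encoder(ecg))

model = QRSDelineator()
signal = torch.randn(1, 1, 5000)  # 10 s of ECG at 500 Hz (placeholder)
logits = model(signal)            # (1, 3, 5000); argmax gives labels
```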
26.
AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources. Kalgaonkar, Priyank B., 08 1900.
Indiana University-Purdue University Indianapolis (IUPUI). The research work presented in this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique that removes redundant and insignificant elements that are either irrelevant or do not affect the performance of the network. Cardinality, a new dimension alongside the existing spatial dimensions, and a class-balanced focal loss function, whose weighting factor is inversely proportional to the number of samples per class, have been incorporated into the design of CondenseNeXt's algorithm to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets, CIFAR-10, CIFAR-100, and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in the RTMaps Remote Studio console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), with up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, at the cost of a 2.26% loss in accuracy, thus performing image classification on ARM-based computing platforms without requiring CUDA-enabled GPU support, with outstanding efficiency.
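The depthwise separable convolution that replaces CondenseNet's group convolutions factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise projection, cutting parameters and FLOPs. A hedged PyTorch sketch of such a block follows; the channel counts and BN/ReLU ordering are illustrative, not taken from the CondenseNeXt implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) spatial conv + 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size,
            padding=kernel_size // 2,
            groups=in_ch,  # one spatial filter per input channel
            bias=False,
        )
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
out = block(torch.randn(1, 64, 32, 32))  # -> (1, 128, 32, 32)
```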
27.
Computer vision and hand gestures detection and fingers tracking. Bravenec, Tomáš, January 2019.
This master's thesis focuses on the detection and recognition of hand and finger gestures in static images as well as video sequences. The thesis summarizes several different approaches to the detection itself, along with their advantages and disadvantages. It also includes the implementation of a cross-platform application written in Python using the OpenCV and PyTorch libraries, which can display a selected image or play a video with the recognized gestures highlighted.
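A minimal sketch of such an application's video loop follows: it runs a trained PyTorch gesture classifier on each frame and overlays the predicted label with OpenCV. The model checkpoint, preprocessing, and label set are hypothetical placeholders, not the thesis's actual code.

```python
import cv2
import torch

def preprocess(frame):
    # Resize and normalize a BGR frame into a (1, 3, 224, 224) tensor.
    img = cv2.resize(frame, (224, 224))
    t = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
    return t.unsqueeze(0)

model = torch.load("gesture_model.pt")   # hypothetical trained model
model.eval()
labels = ["fist", "open_hand", "point"]  # illustrative gesture classes

cap = cv2.VideoCapture("input_video.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    with torch.no_grad():
        pred = model(preprocess(frame)).argmax(dim=-1).item()
    # Overlay the recognized gesture on the frame before showing it.
    cv2.putText(frame, labels[pred], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("gestures", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```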
28.
Detection of persons and evaluation of gender and age in image data. Dobiš, Lukáš, January 2020.
This master's thesis deals with the automatic recognition of people in image data, using convolutional neural networks to locate faces and subsequently analyze the obtained data. The face analysis determines the person's gender, emotion, and age. The thesis describes the convolutional network architectures used for each subtask. The age estimation network was given newly trained weights, which were then frozen, and LSTM layers were inserted into its architecture. These layers were fine-tuned separately and tested on a new dataset created for this purpose. The test results show improved age prediction. A solution for fast, robust, and modular detection of faces and other human features from a single image or video is presented as a combination of interconnected convolutional networks. These are implemented as a script and subsequently explained. Their speed is sufficient for further face analyses on live image data.
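The frozen-backbone-plus-LSTM idea can be sketched in PyTorch as follows: a trained CNN's weights are frozen, and only the inserted LSTM layers and a small regression head remain trainable. The choice of ResNet-18 and all dimensions are assumptions for illustration, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen CNN backbone exposing 512-d features per face crop.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()
for p in backbone.parameters():
    p.requires_grad = False  # backbone weights stay frozen

# Trainable LSTM layers inserted on top of the frozen features.
lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
age_head = nn.Linear(128, 1)  # regression output: estimated age

frames = torch.randn(1, 8, 3, 224, 224)  # sequence of face crops
feats = torch.stack(
    [backbone(frames[:, t]) for t in range(frames.size(1))], dim=1
)                                        # (1, 8, 512)
out, _ = lstm(feats)
age = age_head(out[:, -1])               # predict from the last step
```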
30.
Optimal Q-Space Sampling Scheme: Using Gaussian Process Regression and Mutual Information. Hassler, Ture and Berntsson, Jonathan, January 2022.
Diffusion spectrum imaging is a type of diffusion magnetic resonance imaging capable of capturing very complex tissue structures, but it requires a very large number of samples in q-space and therefore a long acquisition time. The purpose of this project was to create and evaluate a new q-space sampling scheme for diffusion MRI, attempting to recreate the ensemble averaged propagator (EAP) from fewer samples without significant loss of quality. The sampling scheme was created by greedily selecting the measurements that contribute the most mutual information; the EAP was then recreated from the sampled measurements using interpolation. The mutual information was approximated using the kernel of a Gaussian process machine learning model. The project showed limited but promising results on synthetic data, but was highly restricted by the amount of available computational power: having to resort to a lower-resolution mesh when calculating the optimal sampling scheme significantly reduced overall performance.
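A sketch of one common greedy mutual-information criterion under a Gaussian process kernel (in the style of Krause et al.) follows: each step adds the candidate whose variance given the selected set is large relative to its variance given the remaining unselected points. The RBF kernel, toy grid, and all parameters are placeholders and may differ from the thesis's exact formulation.

```python
import numpy as np

def rbf_kernel(X, Y, ls=1.0):
    # Squared-exponential covariance between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def cond_var(K, i, idx):
    """GP posterior variance of point i given observations at idx."""
    if len(idx) == 0:
        return K[i, i]
    Kaa = K[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx))  # jitter
    Kia = K[i, idx]
    return K[i, i] - Kia @ np.linalg.solve(Kaa, Kia)

def greedy_mi(X, n_select):
    """Greedily pick points maximizing the MI gain ratio."""
    K = rbf_kernel(X, X)
    selected, rest = [], set(range(len(X)))
    for _ in range(n_select):
        best, best_score = None, -np.inf
        for y in rest:
            others = list(rest - {y})
            score = cond_var(K, y, selected) / cond_var(K, y, others)
            if score > best_score:
                best, best_score = y, score
        selected.append(best)
        rest.remove(best)
    return selected

grid = np.random.rand(50, 3)  # toy 3-D q-space candidate points
print(greedy_mi(grid, n_select=10))
```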