  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

HIGH-PERFORMANCE COMPUTING MODEL FOR A BIO-FUEL COMBUSTION PREDICTION WITH ARTIFICIAL INTELLIGENCE

Veeraraghava Raju Hasti (8083571) 06 December 2019
<p>The main accomplishments of this research are: </p> <p>(1) a high-fidelity computational methodology, based on large eddy simulation, that captures the lean blowout (LBO) behavior of different fuels; </p> <p>(2) fundamental insights into the combustion processes leading to flame blowout and into the effect of fuel composition on lean blowout limits; </p> <p>(3) artificial intelligence-based models for early detection of the onset of lean blowout in a realistic complex combustor. </p> <p>The methodologies are demonstrated by performing lean blowout (LBO) calculations and statistical analysis for a conventional jet fuel (A-2) and an alternative bio-jet fuel (C-1).</p> <p>A high-performance computing methodology is developed based on large eddy simulation (LES) turbulence models with detailed-chemistry and flamelet-based combustion models. This methodology is employed to predict the combustion characteristics of conventional fuels and bio-derived alternative jet fuels in a realistic gas turbine engine. The uniqueness of this methodology lies in the inclusion of as-is combustor hardware details, such as the complex hybrid-airblast fuel injector, thousands of tiny effusion holes, and the primary and secondary dilution holes on the liners, and in the use of highly automated on-the-fly meshing with adaptive mesh refinement. Flow-split and mesh-sensitivity studies are performed under non-reacting conditions. Reacting LES simulations are performed with two combustion models (finite-rate chemistry and flamelet generated manifold) and four different chemical kinetic mechanisms. The reacting spray characteristics and flame shape are compared with experiment at the near-lean-blowout stable condition for both combustion models. The LES simulations are performed by reducing the fuel flow rate gradually, in a stepwise manner, until lean blowout is reached. The computational methodology predicted the fuel sensitivity to lean blowout accurately, with the correct trends between the conventional and alternative bio-jet fuels. The flamelet generated manifold (FGM) model showed a 60% reduction in computational time compared to the finite-rate chemistry model. </p> <p>Statistical analyses of the high-fidelity LES results are performed to gain fundamental insights into the LBO process and to identify key markers for predicting the incipient LBO condition in swirl-stabilized spray combustion. The bio-jet fuel (C-1) exhibits significantly larger CH<sub>2</sub>O concentrations in the fuel-rich regions than the conventional petroleum fuel (A-2) at the same equivalence ratio. The analysis shows that the concentration of formaldehyde increases significantly in the primary zone as the LBO limit is approached, indicating partial oxidation. The analysis also shows that the temperature of the recirculating hot gases is an important parameter for maintaining a stable flame: if this temperature falls below a fuel-specific threshold, the evaporation and heat release rates decrease significantly, leading to the global extinction phenomenon called lean blowout. The present study established the minimum recirculating gas temperature needed to maintain a stable flame for the A-2 and C-1 fuels. </p> <p>Artificial intelligence (AI) models are developed from the high-fidelity LES data for early identification of the incipient LBO condition in a realistic gas turbine combustor under engine-relevant conditions. The first approach is based on sensor-based monitoring of quantities of interest at optimal probe locations within the combustor using a support vector machine (SVM). The optimal sensor locations are found to be in the flame root region and are effective in detecting the onset of LBO ~20 ms ahead of the event. The second approach is based on spatiotemporal features in the primary zone of the combustor. A convolutional autoencoder is trained for feature extraction from the OH mass fraction data at all time steps, resulting in significant dimensionality reduction. The extracted features, along with ground-truth labels, are used to train a support vector machine (SVM) for binary classification. The LBO indicator is defined as the output of the SVM model: 1 for unstable and 0 for stable. The indicator stabilized at 1 approximately 30 ms before complete blowout.</p>
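The SVM-based LBO indicator described above (1 = unstable, 0 = stable) suggests a simple post-processing step: find when the indicator settles at 1 and measure the lead time before blowout. A minimal sketch in Python, with illustrative function and variable names that are not from the thesis:

```python
def lbo_lead_time(indicator, times_ms, blowout_ms):
    """Time between the LBO indicator settling at 1 and complete blowout.

    indicator  -- sequence of 0/1 classifier outputs, one per LES snapshot
    times_ms   -- snapshot times in milliseconds (same length as indicator)
    blowout_ms -- time of complete blowout in milliseconds
    Returns None if the indicator never settles at 1.
    """
    settle = 0  # index of the first sample after the last observed 0
    for i, value in enumerate(indicator):
        if value == 0:
            settle = i + 1
    if settle >= len(indicator):
        return None  # indicator never stabilized at 1
    return blowout_ms - times_ms[settle]
```

With the indicator sequence [0, 0, 1, 0, 1, 1, 1] sampled every 5 ms and blowout at 50 ms, the indicator settles at t = 20 ms, giving a 30 ms lead time, consistent with the order of magnitude quoted in the abstract.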
152

An Intelligent UAV Platform For Multi-Agent Systems

Taashi Kapoor (12437445) 21 April 2022
<p> This thesis presents work and simulations on the use of artificial intelligence for real-time perception and real-time anomaly detection using the computer and sensors onboard an Unmanned Aerial Vehicle. One goal of this research is to develop a highly accurate, high-performance computer vision system that can serve as a framework for object detection, obstacle avoidance, motion estimation, 3D reconstruction, and vision-based GPS-denied path planning. The method developed and presented here integrates software and hardware techniques to reach optimal performance for real-time operations. </p> <p>This thesis also presents a solution for real-time anomaly detection using neural networks to further the safety and reliability of UAV operations. Real-time telemetry data from different sensors are used to predict failures before they occur. Together, these two systems form the framework behind the Intelligent UAV platform, which can be rapidly adapted to a wide variety of use cases because of its modular nature and onboard suite of sensors. </p>
153

Prediction of Protein-Protein Interactions Using Deep Learning Techniques

Soleymani, Farzan 24 April 2023
Proteins are the primary actors in living organisms, and they perform their functions mainly by interacting with other proteins. Protein-protein interactions (PPIs) underpin biological activities such as metabolic cycles, signal transduction, and the immune response. PPI identification has been addressed by various experimental methods, such as yeast two-hybrid assays, mass spectrometry, and protein microarrays, to mention a few. However, due to the sheer number of proteins, experimental methods for finding interacting and non-interacting protein pairs are time-consuming and costly. Therefore, a sequence-based framework called ProtInteract is developed to predict protein-protein interactions. ProtInteract comprises two components: first, a novel autoencoder architecture that encodes each protein's primary structure into a lower-dimensional vector while preserving its underlying sequential pattern, extracting uncorrelated attributes and more expressive descriptors. This leads to faster training of the second component, a deep convolutional neural network (CNN) that receives the encoded proteins and predicts their interaction. The prediction task is formulated in three different scenarios; in each, the deep CNN predicts the class of a given encoded protein pair, where each class corresponds to a range of confidence scores for the probability that the predicted interaction occurs. The proposed framework features low computational complexity and fast response times. The present study makes two significant contributions to the field of PPI prediction. First, it addresses the computational challenges posed by the high dimensionality of protein datasets through dimensionality reduction techniques that extract highly informative sequence attributes. Second, the proposed framework, ProtInteract, uses this information to identify the interaction characteristics of a protein from its amino acid configuration. ProtInteract encodes the protein's primary structure into a lower-dimensional vector space, thereby reducing the computational complexity of PPI prediction. Our results provide evidence of the proposed framework's accuracy and efficiency in predicting protein-protein interactions.
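The core idea of encoding a variable-length primary structure into a fixed low-dimensional vector that a classifier can consume can be illustrated with a toy stand-in for the learned autoencoder: a 20-dimensional amino-acid-composition encoding. This is not ProtInteract's architecture, only a hedged sketch of the encode-then-classify pipeline, with illustrative names:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def composition_vector(sequence):
    """Encode a protein's primary structure as a fixed 20-dimensional
    vector of residue frequencies -- a toy stand-in for a learned
    autoencoder embedding of the sequence."""
    n = len(sequence)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

def pair_features(seq_a, seq_b):
    """Concatenate both encodings into one feature vector; a downstream
    classifier (a deep CNN in the thesis) would predict the interaction
    class from a vector like this."""
    return composition_vector(seq_a) + composition_vector(seq_b)
```

Any fixed-length encoding like this lets a pair of sequences of arbitrary lengths map to a constant-size input, which is what makes fast downstream training possible.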
154

Segmentation and Depth Estimation of Urban Road Using Monocular Camera and Convolutional Neural Networks

Djikic, Addi January 2018
Deep learning for safe autonomous transport is rapidly emerging. Fast and robust perception will be crucial for future autonomous-vehicle navigation in urban areas with heavy traffic and human interplay. Previous work focuses on extracting full-image depth maps or on finding specific road features such as lanes. However, in urban environments lanes are not always present, and sensors such as LiDAR provide only a sparse 3D point-cloud depth perception of the road, with demanding algorithmic approaches. In this thesis we derive a novel convolutional neural network that we call AutoNet. It is designed as an encoder-decoder network for pixel-wise depth estimation of the urban drivable free-space, using only a monocular camera, handled as a supervised regression problem. AutoNet is also constructed as a classification network that solely classifies and segments the drivable free-space in real-time with monocular vision, handled as a supervised classification problem, which proves to be a simpler and more robust solution than the regression approach. We also implement the state-of-the-art neural network ENet for comparison, which is designed for fast real-time semantic segmentation and fast inference. The evaluation shows that AutoNet outperforms ENet on every performance metric, but is slower in terms of frame rate. Optimization techniques for increasing the network's frame rate while maintaining its robustness and performance are proposed as future work. All training and evaluation are done on the Cityscapes dataset. New ground-truth labels for road depth perception are created for training with a novel approach that fuses pre-computed depth maps with semantic labels. Data collection with a Scania vehicle, mounted with a monocular camera, is conducted to test the final derived models. The proposed AutoNet shows promising state-of-the-art performance in both road depth estimation and road classification.
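The two problem formulations above come with natural evaluation metrics: pixel-wise RMSE for the depth-regression variant and intersection-over-union for the free-space classification variant. A small sketch over flattened pixel lists (function names are illustrative, not from the thesis):

```python
import math

def depth_rmse(pred, truth):
    """Pixel-wise root-mean-square error for the depth-regression
    variant (depths as flat lists, e.g. in metres)."""
    assert len(pred) == len(truth)
    se = sum((p - t) ** 2 for p, t in zip(pred, truth))
    return math.sqrt(se / len(pred))

def freespace_iou(pred_mask, truth_mask):
    """Intersection-over-union for the classification variant
    (1 = drivable free-space, 0 = not), over flat binary masks."""
    inter = sum(1 for p, t in zip(pred_mask, truth_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, truth_mask) if p or t)
    return inter / union if union else 1.0  # empty masks agree trivially
```

For example, a predicted mask [1, 1, 0] against ground truth [1, 0, 0] has intersection 1 and union 2, giving an IoU of 0.5.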
155

Unsupervised Detection of Interictal Epileptiform Discharges in Routine Scalp EEG : Machine Learning Assisted Epilepsy Diagnosis

Shao, Shuai January 2023
Epilepsy affects more than 50 million people, making it one of the most prevalent neurological disorders, and it has a high impact on the quality of life of those suffering from it. However, 70% of epilepsy patients can live seizure-free with proper diagnosis and treatment. Patients are evaluated using scalp EEG recordings, which are cheap and non-invasive. The diagnostic yield is low, however, and qualified personnel must process large amounts of data to assess patients accurately. MindReader is an unsupervised classifier that detects spectral anomalies and generates a hypothesis of the underlying patient state over time. The aim is to highlight abnormal, potentially epileptiform states, which could expedite the analysis of patients and let qualified personnel attest to the results. It was used to evaluate 95 scalp EEG recordings from healthy adults and adult patients with epilepsy. Interictal epileptiform discharges (IEDs) occurring in the samples had been retroactively annotated, along with the patient state and maneuvers performed by personnel, to enable characterization of the classifier's detection performance. The performance was slightly worse than previous benchmarks on pediatric scalp EEG recordings, with a 7% and 33% drop in specificity and sensitivity, respectively. Electrode positioning and the partial spatial extent of events had a notable impact on performance; however, no correlation between annotated disturbances and reduced performance could be found. Additional exploratory analysis was performed on serialized intermediate data to evaluate the analysis design. Hyperparameters and electrode-montage options were exposed to optimize the average Matthews correlation coefficient (MCC) per electrode per patient on a subset of the patients with epilepsy. An increased window length and a reduced amount of training, along with a common average montage, proved most successful. The Euclidean distance of cumulative spectra (ECS), a metric suitable for spectral analysis, was implemented alongside corresponding L2 and L1 loss functions; of these, the ECS further improved the average performance for all samples. Four additional analyses featuring new time-frequency transforms and multichannel convolutional autoencoders were evaluated; an analysis using the continuous wavelet transform (CWT) and a convolutional autoencoder performed best, with an average MCC score of 0.19 and 56.9% sensitivity at approximately 13.9 false positives per minute.
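The Matthews correlation coefficient used as the optimization target above is a standard quantity computed from confusion-matrix counts, and a false-positive rate per minute follows directly from the recording length. A short sketch of both (the thesis's own tooling is not shown here; names are illustrative):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    returns 0.0 when the denominator vanishes, by convention."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def false_positives_per_minute(fp, recording_seconds):
    """Rate of false detections normalized by recording duration."""
    return fp / (recording_seconds / 60.0)
```

For instance, 45 true positives, 45 true negatives, 5 false positives, and 5 false negatives give an MCC of 0.8, while a degenerate all-zero confusion matrix maps to 0.0.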
156

Discovering New Physics Using Anomaly Detection with Machine Learning at the Large Hadron Collider

Leissner-Martin, Julien 12 1900
Particle physics is currently governed by a set of laws called the Standard Model (SM). The model notably dictates which particles exist and how they interact with one another. It predicts all kinds of results that are constantly tested and confirmed by a multitude of experiments, including the ATLAS experiment at the Large Hadron Collider (LHC). However, this highly precise model can describe only about 5% of the matter in the Universe, and is therefore incomplete. Scientists have combed through many studies in search of new physics, but to no avail.

Theorists have not been idle either, and have concocted many theories that can be seen as extensions of the Standard Model. Unfortunately, more than ten years after the discovery of the Higgs boson at the LHC, which confirmed the current SM theory, no sign of these extensions has been found. In this dissertation, we propose to use artificial intelligence to help find hints of new physics.

To do so, we train machine learning models to recognize signs of new physics in real or simulated data from proton-proton collisions in the ATLAS detector. This detector operates at the LHC, the largest collider in the world, and our data come from center-of-mass energies of 13 TeV. We use the four-vectors of the particles contained in boosted, large-radius jets (collimated clusters of particles present in ATLAS) where new physics might hide. Among other things, we try to recover signals of top quarks and of hypothetical particles from a model with an extended Higgs sector.

Currently, our models distinguish signal from background well. However, the results are correlated with the jet mass, and any attempt to counter this correlation greatly reduces the discrimination between signal and background. We will also need to improve background rejection to have any hope of finding new physics in the ATLAS data.

Keywords: particle physics, LHC, Large Hadron Collider, ATLAS, CERN, artificial intelligence, machine learning, neural network, variational autoencoder, anomalies, boosted jet, large-radius jet
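The jet-mass correlation mentioned above refers to the invariant mass of a large-radius jet, computed from the summed four-vectors of its constituents. A minimal sketch in natural units (c = 1, energies and momenta in GeV), with an illustrative function name:

```python
import math

def jet_mass(constituents):
    """Invariant mass of a jet from its constituents' four-vectors.

    constituents -- iterable of (E, px, py, pz) tuples in natural units.
    The jet four-vector is the component-wise sum, and the invariant
    mass follows from m^2 = E^2 - |p|^2.
    """
    E = sum(c[0] for c in constituents)
    px = sum(c[1] for c in constituents)
    py = sum(c[2] for c in constituents)
    pz = sum(c[3] for c in constituents)
    m2 = E * E - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0))  # clamp tiny negatives from rounding
```

A single massless constituent gives zero mass, while two back-to-back massless constituents of energy 1 GeV each give a 2 GeV jet, which is why the mass is a property of the jet as a whole rather than of any individual particle.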
157

A deep learning based anomaly detection pipeline for battery fleets

Khongbantabam, Nabakumar Singh January 2021
This thesis proposes a deep learning anomaly detection pipeline to detect possible anomalies during the operation of a fleet of batteries, and presents its development and evaluation. The pipeline employs sensors that connect to each battery in the fleet to remotely collect real-time measurements of operating characteristics such as voltage, current, and temperature. The deep learning based time-series anomaly detection model was developed using a Variational Autoencoder (VAE) architecture that uses either Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) networks as the encoder and decoder (LSTMVAE and GRUVAE). Both variants were evaluated against three well-known conventional anomaly detection algorithms: Isolation Nearest Neighbour (iNNE), Isolation Forest (iForest), and k-th Nearest Neighbour (k-NN). All five models were trained on two variants of the training dataset (a full-year dataset and a partial, recent dataset), producing a total of 10 model variants. The models were trained in an unsupervised manner, and the results were evaluated on a test dataset consisting of a few known anomaly days in the past operation of the customer's battery fleet. The results demonstrated that k-NN and GRUVAE performed close to each other and outperformed the rest of the models by a notable margin. LSTMVAE and iForest performed moderately, while iNNE and the iForest variant trained on the full dataset performed worst in the evaluation. A general observation is that limiting the training dataset to a recent period produces better results nearly consistently across all models.
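A common way to turn an autoencoder's output into an anomaly decision, stated here only as a generic sketch rather than the pipeline's actual rule, is to threshold the reconstruction error against statistics gathered on normal training data:

```python
import statistics

def anomaly_flags(train_errors, test_errors, k=3.0):
    """Flag test samples whose reconstruction error exceeds
    mean + k * std of the errors observed on normal training data.

    train_errors -- reconstruction errors on (assumed normal) training data
    test_errors  -- reconstruction errors to classify
    k            -- threshold multiplier (illustrative default)
    """
    mu = statistics.mean(train_errors)
    sigma = statistics.pstdev(train_errors)
    threshold = mu + k * sigma
    return [e > threshold for e in test_errors]
```

The same thresholding applies regardless of whether the errors come from an LSTM or GRU based VAE, which is what makes the two variants directly comparable on a shared test set.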
158

Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries

Teng, Sin Yong January 2020
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation are forcing these traditional plants toward shutdown and decommissioning. Process improvement and retrofit projects are essential for maintaining the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These fields generally rely on mathematical optimization, practitioner experience, and operational heuristics, and they serve as the foundation for process improvement; their performance, however, can be further enhanced with modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis approaches this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing problematic parts of a system so that they can be eliminated, together with a proposed extension that solves multi-dimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte-Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) a view of the future of artificial intelligence and process engineering in biosystems through a commercially grounded multi-omics paradigm.
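Contribution (iii), bottleneck tree analysis (BOTA), centres on locating the limiting part of a system. As a toy illustration only, not the BOTA algorithm itself, the bottleneck of a serial process chain is simply the unit with the minimum throughput capacity (unit names and figures below are invented):

```python
def serial_bottleneck(unit_capacities):
    """Return (name, capacity) of the limiting unit in a serial chain.

    unit_capacities -- dict mapping unit name to throughput capacity
    (e.g. tonnes per hour). In a serial chain, overall throughput is
    capped by the unit with the smallest capacity.
    """
    name = min(unit_capacities, key=unit_capacities.get)
    return name, unit_capacities[name]
```

Debottlenecking then amounts to raising that minimum, after which a different unit may become the new bottleneck, which is what motivates a tree-structured, iterative analysis.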
