411

Automatic Change Detection in Visual Scenes

Brolin, Morgan January 2021 (has links)
This thesis proposes a Visual Scene Change Detector (VSCD) system that involves four parts: image retrieval, image registration, image change detection, and panorama creation. Two prestudies are conducted to select an image retrieval method and an image change detection method. The two selected methods are then combined with a proposed image registration method and a proposed panorama creation method to form the proposed VSCD. The image retrieval prestudy evaluates a SIFT-based method against a bag-of-words method and finds the SIFT-based method to be superior. The image change detection prestudy evaluates eight different image change detection methods. Its results show that method performance depends on the image category, and that an ensemble method is the least dependent on the category of images. The ensemble method is found to perform best, followed by a range filter method and then a Convolutional Neural Network (CNN) method. Combining the two image retrieval methods with the eight image change detection methods yields sixteen different VSCDs, which are then tested. The final results show that the VSCD comprising the best methods from the prestudies performs best.
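A minimal sketch of the range filter change detection idea mentioned above, assuming two registered grayscale images: a range filter (local maximum minus local minimum) emphasizes texture, and thresholding the difference of the filtered images flags changed regions. The window size and threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def range_filter(img: np.ndarray, size: int = 5) -> np.ndarray:
    """Local intensity range: max - min over a size x size window."""
    return maximum_filter(img, size=size) - minimum_filter(img, size=size)

def detect_changes(reference: np.ndarray, query: np.ndarray,
                   size: int = 5, threshold: float = 0.15) -> np.ndarray:
    """Binary change mask between two registered grayscale images in [0, 1]."""
    diff = np.abs(range_filter(reference, size) - range_filter(query, size))
    return diff > threshold
```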
412

Simultaneous Detection and Validation of Multiple Ingredients on Product Packages: An Automated Approach: Using CNN and OCR Techniques

Farokhynia, Rodbeh, Krikeb, Mokhtar January 2024 (has links)
Manual proofreading of product packaging is a time-consuming and uncertain process that can pose significant challenges for companies, such as scalability issues, compliance risks, and high costs. This thesis work introduces a novel solution, employing advanced computer vision and machine learning methods to automate the proofreading of multiple ingredient lists, corresponding to multiple products, simultaneously within a product package. By integrating Convolutional Neural Network (CNN) and Optical Character Recognition (OCR) techniques, this study examines the efficacy of automated proofreading in comparison to manual methods. The thesis involves analyzing product package artwork to identify ingredient lists using the YOLOv5 object detection algorithm, with the optical character recognition tool EasyOCR used for ingredient extraction. Additionally, Python scripts are employed to extract ingredients from the corresponding INCI PDF files (documents that list the standardized names of ingredients used in cosmetic products). A comprehensive comparison is then conducted to evaluate the accuracy and efficiency of automated proofreading. Comparing the ingredients extracted from the product packages against their corresponding INCI PDF files yielded a match rate of 12.7%. Despite the suboptimal result, insights from the study highlight the limitations of current detection and recognition algorithms when applied to complex artwork: for example, the trained YOLOv5 model cuts through sentences in the ingredient list, and EasyOCR cannot extract ingredients from vertically aligned product package images. The findings underscore the need for advancements in detection algorithms and OCR tools to effectively handle complex objects like product packaging designs. The study also suggests that companies such as H&M consider updating their artwork and INCI PDF files to align with the capabilities of current AI-driven tools. By doing so, they can enhance the efficiency and overall effectiveness of automated proofreading processes, thereby reducing errors and improving accuracy.
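A minimal sketch of the detection-then-recognition pipeline described above: a custom-trained YOLOv5 model localizes ingredient lists on the artwork, and EasyOCR reads the text inside each detected box. The weights file, confidence cutoff, and comma-splitting heuristic are assumptions for illustration.

```python
import cv2
import easyocr
import torch

# Hypothetical custom weights trained to detect ingredient-list boxes
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="ingredient_list_yolov5.pt")
reader = easyocr.Reader(["en"])

image = cv2.imread("package_artwork.png")
detections = model(image).xyxy[0]  # rows: [x1, y1, x2, y2, conf, class]

for x1, y1, x2, y2, conf, cls in detections.tolist():
    if conf < 0.5:  # assumed confidence cutoff
        continue
    crop = image[int(y1):int(y2), int(x1):int(x2)]
    # readtext returns (bbox, text, confidence) tuples, one per text line
    lines = [text for _, text, _ in reader.readtext(crop)]
    ingredients = [i.strip() for i in " ".join(lines).split(",")]
    print(ingredients)
```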
413

Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis

Pérez Pelegrí, Manuel 27 April 2023 (has links)
Cardiovascular diseases are among the most predominant causes of death and comorbidity in developed countries; as such, heavy investments have been made in recent decades to produce high-quality diagnosis tools and treatment applications for cardiac diseases. One of the best proven tools to characterize the heart is magnetic resonance imaging (MRI), thanks to its high-resolution capabilities in both the spatial and temporal dimensions, allowing dynamic imaging of the heart that enables accurate diagnosis. The dimensions of the left ventricle and the ejection fraction derived from them are the most powerful predictors of cardiac morbidity and mortality, and their quantification has important connotations for the management and treatment of patients. Thus, cardiac MRI is the most accurate imaging technique for left ventricular assessment. To reach an accurate and fast diagnosis, reliable image-based biomarker computation through image processing software is needed. Nowadays most of the tools employed rely on semi-automatic Computer-Aided Diagnosis (CAD) systems that require the clinical expert to interact with them, consuming valuable time from professionals whose focus should be solely on interpreting results. A paradigm shift is starting to enter the medical sector in which fully automatic CAD systems do not require any kind of user interaction. These systems are designed to compute the biomarkers required for a correct diagnosis without impacting the physician's natural workflow, and they can start their computations the moment an image is saved within a hospital archive system. Automatic CAD systems, although regarded as one of the next big advances in the radiology world, are extremely difficult to develop and rely on Artificial Intelligence (AI) technologies to reach medical standards. In this context, deep learning (DL) has emerged in the past decade as the most successful technology to address this problem. More specifically, convolutional neural networks (CNNs) have been one of the most successful and studied techniques for image analysis, including medical imaging. In this work we describe the main applications of CNNs in fully automatic CAD systems to help in the clinical diagnostic routine by means of cardiac MRI. The work covers the main points to take into account when developing such systems and presents several impactful results on the use of CNNs for cardiac MRI, separated into three main projects covering the problems of segmentation, automatic biomarker estimation with explainability, and event detection. The full work describes novel and powerful approaches for applying CNNs to cardiac MRI analysis. It provides several key findings, enabling multiple ways of integrating this rapidly growing technology into fully automatic CAD systems that can produce highly accurate, fast, and reliable results. The results described will greatly improve and positively impact the workflow of clinical experts in the near future. / Pérez Pelegrí, M. (2023). Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192988
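As a hedged illustration of the segmentation component mentioned above, the sketch below shows the kind of encoder-decoder CNN commonly used for per-pixel left-ventricle segmentation, in PyTorch. The architecture, channel counts, and input size are illustrative assumptions, not the networks developed in the thesis.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with one skip connection."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)  # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution features
        e2 = self.enc2(self.pool(e1))  # half-resolution features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)            # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # one grayscale MRI slice
```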
414

Neural networks for automatic speaker, language, and sex identification

Do, Ngoc January 2016 (has links)
Title: Neural networks for automatic speaker, language, and sex identification Author: Bich-Ngoc Do Department: Institute of Formal and Applied Linguistics Supervisor: Ing. Mgr. Filip Jurek, Ph.D., Institute of Formal and Applied Linguistics and Dr. Marco Wiering, Faculty of Mathematics and Natural Sciences, University of Groningen Abstract: Speaker recognition is a challenging task with applications in many areas, such as access control and forensic science. In recent years, the deep learning paradigm and its branch, deep neural networks, have emerged as powerful machine learning techniques and achieved state-of-the-art results in many fields of natural language processing and speech technology. The aim of this work is therefore to explore the capability of a deep neural network model, recurrent neural networks, in speaker recognition. Our proposed systems are evaluated on the TIMIT corpus using the speaker identification task. In comparison with other systems under the same test conditions, our systems could not surpass the reference ones due to the sparsity of validation data. In general, our experiments show that the best system configuration is a combination of MFCCs with their dynamic features and a recurrent neural network model. We also experiment with recurrent neural networks and convolutional neural...
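A minimal sketch of the best configuration reported above, assuming a librosa and PyTorch setup: MFCCs stacked with their first- and second-order dynamic (delta) features feed a recurrent network that classifies the speaker. The file name and hyperparameters are assumptions for illustration; TIMIT's 630 speakers set the output size.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

wave, sr = librosa.load("speaker_utterance.wav", sr=16000)  # hypothetical file
mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=13)
feats = np.vstack([mfcc,
                   librosa.feature.delta(mfcc),
                   librosa.feature.delta(mfcc, order=2)])   # shape (39, frames)

class SpeakerRNN(nn.Module):
    def __init__(self, n_speakers: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=39, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_speakers)

    def forward(self, x):            # x: (batch, frames, 39)
        _, (h, _) = self.rnn(x)      # final hidden state summarizes the utterance
        return self.out(h[-1])       # one logit per speaker

model = SpeakerRNN(n_speakers=630)   # TIMIT contains 630 speakers
x = torch.tensor(feats.T, dtype=torch.float32).unsqueeze(0)
speaker_logits = model(x)
```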
415

Residual Capsule Network

Sree Bala Shrut Bhamidi (6990443) 13 August 2019 (has links)
Convolutional Neural Networks (CNNs) have brought substantial improvements to the field of machine learning, but they come with their own set of drawbacks. Capsule Networks address the limitations of CNNs and have shown great improvement by calculating the pose and transformation of the image. Deeper networks are more powerful than shallow networks but, at the same time, more difficult to train. Residual Networks ease training and have shown evidence that they can give good accuracy at considerable depth. Putting the best of Capsule Networks and Residual Networks together, we present the Residual Capsule Network and the 3-Level Residual Capsule Network. The conventional convolutional layer in the Capsule Network is replaced by residual-style skip connections to decrease the complexity of the baseline Capsule Network and the seven-ensemble Capsule Network. We trained our models on the MNIST and CIFAR-10 datasets and observed a significant decrease in the number of parameters compared to the baseline models.
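A minimal sketch, in PyTorch, of the substitution described above: a residual block with a skip connection stands in for a conventional convolutional layer in front of a capsule-style layer. Channel counts, kernel sizes, and the 8-D capsule grouping are illustrative assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # skip connection eases training

# Residual feature extractor feeding a primary-capsule style grouping:
features = nn.Sequential(nn.Conv2d(1, 64, 9), ResidualBlock(64))
x = features(torch.randn(1, 1, 28, 28))  # MNIST-sized input -> (1, 64, 20, 20)
capsules = x.view(1, -1, 8)              # group activations into 8-D capsule vectors
```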
416

Vision based facial emotion detection using deep convolutional neural networks

Julin, Fredrik January 2019 (has links)
Emotion detection, also known as facial expression recognition, is the art of mapping an emotion to some form of input data taken from a human. This is a powerful tool for extracting valuable information from individuals, usable for many different purposes, ranging from medical conditions such as depression to customer feedback. Solving facial expression recognition requires smaller subtasks which together form the complete system. Breaking down the bigger task at hand, one can think of these subtasks as a pipeline that implements the steps necessary to classify some input and give an output in the form of an emotion. In recent times, with the rise of computer vision, images are often used as input for these systems and have shown great promise, as the human face conveys the subject's emotional state and contains more information than other inputs such as text or audio. Many current state-of-the-art systems combine computer vision with another rising field, AI, or more specifically deep learning. These deep learning methods typically use a special form of neural network, the convolutional neural network, which specializes in extracting information from images, followed by classification using the SoftMax function as the last step before the output of the facial expression pipeline. This thesis work has explored these methods of utilizing convolutional neural networks to extract information from images and builds upon them by exploring a set of machine learning algorithms that replace the commonly used SoftMax function as a classifier, in an attempt not only to increase accuracy but also to optimize the use of computational resources. The work also explores different techniques for the face detection subtask in the pipeline by comparing two approaches. One, more frequently used in the state of the art and said to be more viable for real-time applications, is the Viola-Jones algorithm. The other is a deep learning approach using a state-of-the-art convolutional neural network to perform the detection, often speculated to be too computationally intense to run in real time. Applying a newly developed convolutional neural network inspired by the state of the art, together with the SoftMax classifier, the final performance did not reach state-of-the-art accuracy. However, the machine learning classifiers used show promise and surpass the SoftMax function in performance in several cases when given a massively smaller number of training samples. Furthermore, the results from implementing and testing a pure deep learning approach, using deep learning algorithms for both the detection and classification stages of the pipeline, show that deep learning might outperform the classic Viola-Jones algorithm in terms of both detection rate and frames per second.
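A minimal sketch of the classifier swap explored above, assuming a PyTorch and scikit-learn setup: a pretrained CNN acts as a fixed feature extractor, and a support vector machine replaces the SoftMax layer for emotion classification. The ResNet-18 backbone, seven-class label set, and placeholder data are illustrative assumptions, not the thesis configuration.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the final linear (pre-SoftMax) layer
backbone.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) preprocessed face crops -> (N, 512) features."""
    with torch.no_grad():
        return backbone(images).numpy()

# Placeholder data standing in for preprocessed face crops and emotion labels
X_train = torch.randn(32, 3, 224, 224)
y_train = np.random.randint(0, 7, size=32)  # 7 basic emotion classes (assumed)

clf = SVC(kernel="rbf").fit(embed(X_train), y_train)
pred = clf.predict(embed(torch.randn(4, 3, 224, 224)))
```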
417

Venezuela in the Crosshairs: The Class Discourse of CNN en Español and Telesur

Aguirre, Roberto Atilio, Idiart, Guillermo January 2007 (has links)
This thesis analyzes the coverage by the television channel CNN en Español of the Venezuelan presidential elections held on December 3, 2006. Its objective is to account for the editorial intent of the outlet, that is, its class discourse presented as objective and impartial. The investigation reviews the recent history of Venezuela; the corporate makeup and historical behavior of CNN en Español; and carries out a quantitative and qualitative analysis of the channel's coverage of the Venezuelan presidential elections. The intention is to cross-reference the data in order to observe the outlet's partiality and place it in context. In this way, the thesis serves as a media observatory aimed at analyzing the class discourse of CNN en Español and its place within the struggle for power in Venezuela. / Research program: Comunicación y Política
418

Desirability, Values and Ideology in CNN Travel -- Discourse Analysis on Travel Stories

Laine, Emmi January 2013 (has links)
Title: Values, Desirability and Ideology in CNN Travel -- a Discourse Analysis on Travel Stories Author: Emmi Laine Course: Journalistikvetenskap, Kandidatkurs, H13 J Kand (Bachelor of Journalism, Fall 2013), JMK, Stockholm University, Sweden Aim: The aim is to examine which values and ideologies CNN Travel fulfills in their stories. Method: Qualitative discourse analysis. Summary: This Bachelor's thesis asks what is desirable, and which are the values of CNN Travel, the major U.S. news corporation CNN's online travel site. The question has been answered through a qualitative discourse analysis of 20 chosen travel stories, picked for their relevancy, diversity, and expressive tone. Due to the limited space and the specific textual method, the analysis was restricted to the editorial texts of these stories. The chosen method was discourse analyst Norman Fairclough's model of evaluation, which revealed the explicit and implicit ways the media texts suggest desired characteristics. These linguistic devices took the readers' agreement for granted, as they imposed a shared cultural ground with common values, which is a base for mutual understanding. After identifying the explicit and implicit evaluations, they were organized according to some major discursive themes found in the texts, and finally analyzed in order to expose their underlying values. The results showed how these values brought forth certain ideologies, to some extent in keeping with recent research on tourism and travel journalism. As the study has been put into a larger context of related research, the following pages will first explain some larger concepts of discourse analysis, such as representation, cultural stereotypes, ideology, and power. A cross-section from older to more contemporary theories in culture studies has been utilized: moving from Edward Said's postcolonial classic Orientalism, an example of cultural stereotyping, to the more recent topics of 'promotion culture' and consumerism, and tourism researcher John Urry's ideas about the consumption of places and the 'tourist gaze.' In the end, the study considers what kind of power travel journalism possesses over the represented tourism destinations. Finally, in questioning the travel journalists' legitimacy and power to represent the travel destinations, poststructuralist Michel Foucault's theory of the 'regime of truth,' as well as Antonio Gramsci's idea of 'hegemony,' a theory of dominance through consent, were discussed and confirmed.
419

Road Surface Preview Estimation Using a Monocular Camera

Ekström, Marcus January 2018 (has links)
Recently, sensors such as radars and cameras have been widely used in automotive applications, especially in Advanced Driver-Assistance Systems (ADAS), to collect information about the vehicle's surroundings. Stereo cameras are very popular as they can be used passively to construct a 3D representation of the scene in front of the car. This has allowed the development of several ADAS algorithms that need 3D information to perform their tasks. One interesting application is Road Surface Preview (RSP), where the task is to estimate the road height along the future path of the vehicle. An active suspension control unit can then use this information to regulate the suspension, improving driving comfort, extending the durability of the vehicle, and warning the driver about potential risks on the road surface. Stereo cameras have been successfully used in RSP and have demonstrated very good performance. However, their main disadvantages are high production cost and high power consumption, which limits installing several ADAS features in economy-class vehicles. A less expensive alternative is the monocular camera, which has significantly lower cost and power consumption. Therefore, this thesis investigates the possibility of solving the Road Surface Preview task using a monocular camera. We try two different approaches: structure-from-motion and Convolutional Neural Networks. The proposed methods are evaluated against the stereo-based system. Experiments show that both structure-from-motion and CNNs have good potential for solving the problem, but they are not yet reliable enough to be a complete solution to the RSP task and to be used in an active suspension control unit.
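A minimal sketch of the structure-from-motion approach tried above, assuming OpenCV, two consecutive grayscale frames from the monocular camera, and known intrinsics K: features are matched across the frames, relative pose is recovered from the essential matrix, and matched road points are triangulated up to a global scale. The intrinsics and file names are assumptions for illustration.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])  # assumed intrinsics

frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative camera motion between the two views
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate matched points; monocular depth is known only up to scale
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
road_points = (pts4d[:3] / pts4d[3]).T  # Nx3 points; heights along the path
```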
420

Investigations of calorimeter clustering in ATLAS using machine learning

Niedermayer, Graeme 11 January 2018 (has links)
The Large Hadron Collider (LHC) at CERN is designed to search for new physics by colliding protons with a center-of-mass energy of 13 TeV. The ATLAS detector is a multipurpose particle detector built to record these proton-proton collisions. In order to improve sensitivity to new physics at the LHC, luminosity increases are planned for 2018 and beyond. With this greater luminosity comes an increase in the number of simultaneous proton-proton collisions per bunch crossing (pile-up). This extra pile-up has adverse effects on algorithms for clustering the ATLAS detector's calorimeter cells. These adverse effects stem from overlapping energy deposits originating from distinct particles and could lead to difficulties in accurately reconstructing events. Machine learning algorithms provide a new tool with the potential to improve clustering performance. Recent developments in computer science have given rise to a new set of machine learning algorithms that, in many circumstances, outperform more conventional algorithms. One of these algorithms, the convolutional neural network, has shown impressive performance when identifying objects in 2D or 3D arrays. This thesis develops a convolutional neural network model for calorimeter cell clustering and compares it to the standard ATLAS clustering algorithm.
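A minimal sketch, assuming a PyTorch setup, of the idea described above: a window of calorimeter cell energies is treated as a 3D array (calorimeter layers by an eta-phi grid), and a small convolutional network produces a per-cell cluster mask. The grid size, channel counts, and threshold are illustrative assumptions, not the thesis model.

```python
import torch
import torch.nn as nn

class CellClusterCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # one clustering logit per cell
        )

    def forward(self, energies):  # energies: (batch, 1, layers, eta, phi)
        return self.net(energies)

# One event window: 4 calorimeter layers on a 16x16 eta-phi grid
energies = torch.randn(1, 1, 4, 16, 16).clamp(min=0)
mask = torch.sigmoid(CellClusterCNN()(energies)) > 0.5  # cells assigned to a cluster
```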
