  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Deep learning on large neuroimaging datasets

Jönemo, Johan January 2024 (has links)
Magnetic resonance imaging (MRI) is a medical imaging method that has become increasingly important over the past four decades. This is partly because it allows a 3D representation of part of the body to be acquired without exposing patients to ionizing radiation. It also typically gives better contrast between soft tissues than X-ray-based techniques such as CT. The image acquisition procedure of MRI is also much more flexible: the pulse sequence can be varied not only to change how different types of tissue map to different intensities, but also to measure flow, diffusion or even brain activity over time. Machine learning has gained great impetus over the past decade and a half, probably in part because of work on its mathematical foundations at the end of the last century, in conjunction with the availability of specialized massively parallel processors, originally developed as graphics processing units (GPUs), which are ideal for training and running machine learning models. The work presented in this thesis combines MRI and machine learning in order to leverage the large amounts of MRI data available in open data sets to address questions of clinical relevance about the brain. The thesis comprises three studies. The first investigates which augmentation methods are useful in the larger context of classifying autism. The second is about predicting brain age; in particular, it aims to construct lightweight models that use the MRI volumes in a condensed form, so that a model can be trained in a short time and still reach good accuracy. The third study develops the second further by investigating other ways of condensing the brain volumes. / Funding: This research was supported by the Swedish research council (2017-04889), the ITEA/VINNOVA project ASSIST (Automation, Surgery Support and Intuitive 3D visualization to optimize workflow in IGT SysTems, 2021-01954), and the Åke Wiberg foundation (M20-0031, M21-0119, M22-0088).
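The abstract above says the brain-age models use the MRI volumes "in a condensed form" without specifying the condensation. As a hedged illustration only, one plausible condensation is to replace the 3D volume with 2D projections along each axis; the function name `condense_volume` and the choice of mean/max/std projections are assumptions, not the thesis's actual method:

```python
import numpy as np

def condense_volume(volume: np.ndarray) -> np.ndarray:
    """Condense a 3D MRI volume into a stack of 2D projections
    suitable as input channels for a small, fast-to-train 2D model."""
    projections = []
    for axis in range(3):                      # project along each spatial axis
        for reduce in (np.mean, np.max, np.std):
            projections.append(reduce(volume, axis=axis))
    # Pad every projection to a common square shape so they stack as channels.
    size = max(max(p.shape) for p in projections)
    padded = [np.pad(p, [(0, size - p.shape[0]), (0, size - p.shape[1])])
              for p in projections]
    return np.stack(padded)

volume = np.random.default_rng(0).random((64, 80, 72))  # stand-in for a brain volume
channels = condense_volume(volume)
print(channels.shape)  # (9, 80, 80)
```

A 9-channel 80x80 input is orders of magnitude smaller than the raw volume, which is what makes short training times plausible.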
492

The accuracy, reliability, and practicality of Convolutional Neural Networks in classifying ultrasound images for improved breast cancer diagnosis

Qvarnlöf, Moa January 2024 (has links)
Traditionally, analysing images for patient diagnosis and personalised treatment planning has been reliant only on human expertise. Now, with growing data volumes and the need for faster processing, artificial intelligence (AI) has increased in importance, revolutionising medical image analysis. In this project, a computer-aided diagnosis (CAD) system using deep learning models (DLMs) was developed for ultrasound (US) breast cancer images. Two datasets were combined and used for model training and evaluation. The first dataset was older, larger, and more established, while the second was smaller and recently published at the project start. Both datasets contained benign, malignant and normal cases, with a US image and a mask file for each case. The mask file contained the segmented lesion in the US image. Two different approaches were used in this project. The first approach used only US images to train and evaluate models. The second approach created an overlaid image from the US image and mask file for each case, and fed the overlay image to the model. Different techniques, including class weighting, data augmentation, and pre-processing, were explored to address class imbalance and enhance model performance. Class weighting and data augmentation were both shown to even out class performance. Results indicate that accuracy and minority-class recall can be improved with pre-processing and data augmentation. Hyperparameter tuning optimised the model performance further. Approach 1 achieved an accuracy of 83%, AUC of 87%, benign and normal recalls of 77%, and malignant recall of 95%. Approach 2 achieved an accuracy of 94%, AUC of 96%, benign recall of 97%, malignant recall of 87% and normal recall of 100%. For CAD systems to reach their full potential, they must be reliable and easy to use and interpret for medical professionals. The field of CAD systems for US breast cancer images remains challenged by the lack of comprehensive public datasets. Models must be able to generalise to diverse patient populations, and future work should focus on larger and more diverse datasets.
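The abstract does not give formulas for the overlay images or the class weights. The sketch below is a hypothetical reading: the overlay brightens the masked lesion region, and the weights follow the common inverse-frequency ("balanced") convention; `make_overlay`, `class_weights`, and the example counts are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def make_overlay(us_image: np.ndarray, mask: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a binary lesion mask into a grayscale ultrasound image by
    brightening the masked region, so the model sees texture plus outline."""
    overlay = us_image.astype(np.float32).copy()
    overlay[mask > 0] = (1 - alpha) * overlay[mask > 0] + alpha * overlay.max()
    return overlay

def class_weights(labels: np.ndarray) -> dict:
    """Inverse-frequency class weights to counter class imbalance."""
    classes, counts = np.unique(labels, return_counts=True)
    return {int(c): len(labels) / (len(classes) * n)
            for c, n in zip(classes, counts)}

# Illustrative imbalance: many benign (0), fewer malignant (1) and normal (2).
labels = np.array([0] * 437 + [1] * 210 + [2] * 133)
weights = class_weights(labels)
```

Minority classes receive proportionally larger weights, which is the mechanism by which class weighting "evens out class performance" during training.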
493

Deep Learning to Enhance Fluorescent Signals in Live Cell Imaging

Forsgren, Edvin January 2020 (has links)
No description available.
494

Exploração do paralelismo em arquiteturas para processamento de imagens e vídeo / Parallelism exploration in architectures for video and image processing

Soares, Andre Borin January 2007 (has links)
Nowadays video and image processing is a very important research area because of its widespread use in a broad class of applications, such as entertainment, surveillance, control, medicine and many others. Some of the algorithms used for recognition, compression, decompression, filtering, restoration and enhancement of images require computational power beyond that available in conventional processors, often requiring the development of dedicated architectures. This document presents work on design-space exploration of video and image processing architectures through parallel processing. Many characteristics particular to this kind of architecture are pointed out. A novel technique is presented in which customized Processing Elements work cooperatively over a communication structure using a network on chip.
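The cooperating Processing Elements are a hardware design, but the division of labour can be loosely sketched in software: each worker below stands in for one PE, filtering one image strip with one row of overlap so neighbourhoods stay correct. This is an analogy under stated assumptions, not the thesis's architecture; all names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_strip(image, row_start, row_end):
    """One 'processing element': 3x3 box filter over rows [row_start, row_end).
    The strip is extended by one row on each side so its border pixels
    see their true neighbours."""
    lo, hi = max(row_start - 1, 0), min(row_end + 1, image.shape[0])
    strip = image[lo:hi]
    acc = np.zeros_like(strip, dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(strip, dy, axis=0), dx, axis=1)
    out = acc / 9.0
    return out[row_start - lo: row_start - lo + (row_end - row_start)]

def parallel_filter(image, n_workers=4):
    """Split the image into strips and filter them concurrently."""
    bounds = np.linspace(0, image.shape[0], n_workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        strips = pool.map(lambda b: filter_strip(image, b[0], b[1]),
                          zip(bounds[:-1], bounds[1:]))
    return np.vstack(list(strips))
```

Note that `np.roll` wraps at strip borders, so only interior pixels match a true 3x3 mean; a hardware NoC design would handle borders and inter-PE communication explicitly.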
497

Analyse des lèvres pour reconnaissance des personnes / Lip analysis for person recognition

Saeed, Usman 12 February 2010 (has links) (PDF)
This thesis focuses on a local feature of the human face, the lips, in terms of their relevance to and influence on person recognition. A detailed study is carried out of the various stages of mouth processing: detection, evaluation, normalization, and related applications. First, a lip-detection algorithm is presented that fuses two independent methods, one based on edge detection and the other on segmentation; combining the two by fusion exploits the strengths of each. Next, features are extracted that model the behavioural aspect of lip motion while the person speaks, for use in person recognition. These behavioural features include static features and dynamic features based on optical flow. They are used to build a client model as a Gaussian mixture, and classification is then performed using a Bayesian decision rule. Finally, a temporal normalization method is proposed to handle variations in lip motion during speech. Given several videos in which a person repeats the same phrase, the lip motion in one of the videos is analysed and certain frames are selected as synchronization key frames; the remaining videos are then synchronized to the key frames of the first video, and all videos are temporally normalized by morphing-based interpolation.
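The final stage of the pipeline described above (a client model as a Gaussian mixture, classification by a Bayesian decision rule) can be sketched minimally in numpy. The diagonal-covariance form, the likelihood-ratio test against a world (background) model, and the zero threshold are assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of feature vectors x (n, d) under a
    diagonal-covariance Gaussian mixture with k components."""
    d = x.shape[1]
    comp = []
    for w, mu, var in zip(weights, means, variances):
        quad = ((x - mu) ** 2 / var).sum(axis=1)
        log_norm = -0.5 * (d * np.log(2 * np.pi) + np.log(var).sum())
        comp.append(np.log(w) + log_norm - 0.5 * quad)
    return np.logaddexp.reduce(np.stack(comp), axis=0)

def accept_claim(x, client_gmm, world_gmm, threshold=0.0):
    """Bayesian decision: accept the identity claim when the mean
    log-likelihood ratio (client vs. background model) exceeds a threshold."""
    llr = gmm_loglik(x, *client_gmm).mean() - gmm_loglik(x, *world_gmm).mean()
    return llr > threshold

# Toy one-component models: client features near 0, impostors near 5.
client = (np.array([1.0]), np.array([[0.0, 0.0]]), np.array([[1.0, 1.0]]))
world = (np.array([1.0]), np.array([[5.0, 5.0]]), np.array([[1.0, 1.0]]))
```

In practice the mixture parameters would be fitted to the extracted lip-motion features (e.g. by EM), which is omitted here.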
498

Computational Redundancy in Image Processing

Khalvati, Farzad January 2008 (has links)
This research presents a new performance improvement technique, window memoization, for software and hardware implementations of local image processing algorithms. Window memoization combines the memoization techniques proposed in software and hardware with a characteristic of image data, computational redundancy, to improve the performance (in software) and efficiency (in hardware) of local image processing algorithms. The computational redundancy of an image indicates the percentage of computations that can be skipped when performing a local image processing algorithm on the image. Our studies show that computational redundancy is inherited from two principal redundancies in image data: coding redundancy and interpixel redundancy. We have shown mathematically that the amount of coding and interpixel redundancy of an image has a positive effect on its computational redundancy: higher coding and interpixel redundancy leads to higher computational redundancy. We have also demonstrated (mathematically and empirically) that the amount of coding and interpixel redundancy of an image has a positive effect on the speedup obtained for the image by window memoization, in both software and hardware. Window memoization minimizes the number of redundant computations performed on an image by identifying similar neighborhoods of pixels in the image. It uses a memory, the reuse table, to store the results of previously performed computations. When a set of computations has to be performed for the first time, the computations are performed and the corresponding result is stored in the reuse table. When the same set of computations has to be performed again, the previously calculated result is reused and the actual computations are skipped. Implementing window memoization in software speeds up the computations required to complete an image processing task.
In software, we have developed an optimized architecture for window memoization and applied it to six image processing algorithms: Canny edge detector, morphological gradient, Kirsch edge detector, Trajkovic corner detector, median filter, and local variance. The typical speedups range from 1.2 to 7.9 with a maximum factor of 40. We have also presented a performance model to predict the speedups obtained by window memoization in software. In hardware, we have developed an optimized architecture that embodies the window memoization technique. Our hardware design for window memoization achieves high speedups with an overhead in hardware area that is significantly less than that of conventional performance improvement techniques. As case studies in hardware, we have applied window memoization to the Kirsch edge detector and median filter. The typical and maximum speedup factors in hardware are 1.6 and 1.8, respectively, with 40% less hardware in comparison to conventional optimization techniques.
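The reuse-table mechanism described above lends itself to a direct software sketch. This is a simplified illustration of the idea, not the authors' optimized architecture; `memoized_filter` is a hypothetical name, and the reported hit rate is the fraction of windows whose computation was skipped:

```python
import numpy as np

def memoized_filter(image, kernel, radius=1):
    """Window memoization: cache kernel results per unique pixel
    neighbourhood, skipping recomputation for repeated windows."""
    h, w = image.shape
    pad = np.pad(image, radius, mode='edge')
    reuse_table = {}                     # window bytes -> cached result
    out = np.zeros((h, w), dtype=np.float32)
    hits = 0
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            key = window.tobytes()
            if key in reuse_table:       # redundant computation: reuse
                out[y, x] = reuse_table[key]
                hits += 1
            else:                        # first occurrence: compute and store
                out[y, x] = reuse_table[key] = kernel(window)
    return out, hits / (h * w)

# Local variance (one of the six case-study operators) on a coarsely
# quantised image; high interpixel redundancy yields a high hit rate.
img = (np.arange(64).reshape(8, 8) // 16).astype(np.uint8)
result, hit_rate = memoized_filter(img, np.var)
```

The hit rate plays the role of the image's computational redundancy: the more repeated neighbourhoods, the more computations are skipped.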
500

Object recognition through multiple viewpoints

Salam, Rosalina Abdul January 2000 (has links)
No description available.
