341

Proposta para aceleração de desempenho de algoritmos de visão computacional em sistemas embarcados / A proposal for performance acceleration of computer vision algorithms in embedded systems

Curvello, André Márcio de Lima 10 June 2016 (has links)
This work presents a benchmark for evaluating the performance of the embedded WandBoard Quad platform in image processing, considering the use of its Vivante GC2000 GPU to execute routines written with OpenGL ES 2.0. The evaluation is based on running image filters on the CPU and on the GPU. Filters are among the most commonly used operations in image processing; they are implemented as convolutions, a technique built on successive matrix multiplications, which explains the high computational cost of image-filtering algorithms. Using the GPU in embedded systems is therefore an attractive alternative that makes image processing feasible on such devices: it exploits a resource already present in a wide range of products on the market and accelerates the image-processing algorithms that underlie computer vision applications such as face recognition and gesture recognition, which are increasingly demanded in modern embedded applications. To support this goal, comparative performance studies were carried out between systems and between libraries that help exploit the resources of multicore processors. To demonstrate the potential of the approach and ground the proposal of this work, a benchmark was run as a sequence of tests on a model application that applies the Sobel filter to a stream of images captured from a webcam. The application was executed directly on the embedded CPU and also on the embedded GPU. Running on the GPU through OpenGL ES 2.0 achieved nearly 10 times the performance of the CPU execution and, when readback times are taken into account, a total performance gain of up to 4 times.
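As an illustration of the workload benchmarked above, the minimal CPU sketch below (not the thesis code) applies the 3×3 Sobel operator to a grayscale buffer; the OpenGL ES 2.0 path performs the same per-pixel convolution in a fragment shader, with readback adding the extra cost mentioned in the results.

```cpp
// Minimal CPU reference for a 3x3 Sobel gradient-magnitude filter (not the thesis code).
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> sobel(const std::vector<uint8_t>& src, int w, int h) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    std::vector<uint8_t> dst(src.size(), 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            int sx = 0, sy = 0;
            for (int j = -1; j <= 1; ++j)            // 3x3 convolution window
                for (int i = -1; i <= 1; ++i) {
                    int p = src[(y + j) * w + (x + i)];
                    sx += gx[j + 1][i + 1] * p;
                    sy += gy[j + 1][i + 1] * p;
                }
            int mag = static_cast<int>(std::sqrt(double(sx * sx + sy * sy)));
            dst[y * w + x] = static_cast<uint8_t>(mag > 255 ? 255 : mag);
        }
    }
    return dst;
}
```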
342

Hybrid parallel algorithms for solving nonlinear Schrödinger equation / Hibridni paralelni algoritmi za rešavanje nelinearne Šredingerove jednačine

Lončar Vladimir 17 October 2017 (has links)
Numerical methods and algorithms for solving partial differential equations, especially parallel algorithms, are an important research topic, given their very broad applicability across all areas of science. Rapid advances in computer technology open up new possibilities for faster algorithms and numerical simulations of higher resolution, achieved through parallelization at the different levels that practically all current computers support.

In this thesis we develop parallel algorithms for solving one kind of partial differential equation known as the nonlinear Schrödinger equation (NLSE) with a convolution integral kernel. Equations of this type arise in many fields of physics, such as nonlinear optics, plasma physics and the physics of ultracold atoms, as well as in economics and quantitative finance. We focus on a special type of NLSE, the dipolar Gross-Pitaevskii equation (GPE), which characterizes the behavior of ultracold atoms in the state of Bose-Einstein condensation.

We present novel parallel algorithms for numerically solving the GPE on a wide range of modern parallel computing platforms, from shared-memory systems and dedicated hardware accelerators in the form of graphics processing units (GPUs) to heterogeneous computer clusters. For shared-memory systems, we provide an algorithm and implementation targeting multi-core processors using OpenMP. We also extend the algorithm to GPUs using the CUDA toolkit, and combine the OpenMP and CUDA approaches into a hybrid, heterogeneous algorithm capable of utilizing all available resources of a single computer. Given the inherent memory limitation of a single computer, we develop a distributed-memory algorithm based on the Message Passing Interface (MPI) and the previous shared-memory approaches. To maximize the performance of the hybrid implementations, we optimize the parameters governing the distribution of data and workload using a genetic algorithm. Visualization of the increased volume of output data, enabled by the efficiency of the newly developed algorithms, is a challenge in itself; to address it, we integrate the implementations with a state-of-the-art visualization tool (VisIt) and use it to study two use cases that demonstrate how the developed programs can be applied to simulate real-world systems.
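The abstract does not detail the numerical scheme, so the sketch below is only a hedged illustration of how OpenMP enters such a solver: the pointwise nonlinear/potential sub-step of a generic split-step update for a GPE-like equation, with the trap potential V, contact strength g and time step dt as illustrative parameters, and the dipolar convolution term omitted.

```cpp
// Hedged sketch: nonlinear/potential sub-step of a generic split-step scheme,
// psi <- exp(-i*dt*(V + g*|psi|^2)) * psi, parallelized with OpenMP (compile with -fopenmp).
// The thesis's actual discretization and its dipolar convolution term are not shown here.
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

void nonlinear_step(std::vector<cplx>& psi, const std::vector<double>& V,
                    double g, double dt) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(psi.size()); ++i) {
        double dens = std::norm(psi[i]);          // |psi|^2 at grid point i
        double phase = -dt * (V[i] + g * dens);   // local phase rotation
        psi[i] *= cplx(std::cos(phase), std::sin(phase));
    }
}
```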
343

Restauração de imagens de microscopia de força atômica com uso da regularização de Tikhonov via processamento em GPU / Restoration of atomic force microscopy images using Tikhonov regularization via GPU processing

Augusto Garcia Almeida 04 March 2013 (has links)
Image restoration is a technique with applications in several areas, e.g. medicine, biology and electronics, where one of its goals is to improve the final appearance of images of samples that, for some reason, exhibit imperfections or blurring. Images obtained with the atomic force microscope show blurring caused by the interaction forces between the microscope tip and the sample under study, as well as additive noise introduced by the environment. This work proposes a GPU parallelization of an inherently serial algorithm for the restoration of atomic force microscopy images based on Tikhonov regularization.
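For context, a generic form of Tikhonov-regularized restoration is sketched below; the particular blur operator H, regularization operator L and weight λ used in the thesis are not specified in the abstract, so the notation is illustrative.

```latex
% Generic Tikhonov-regularized restoration of blurred, noisy data g = Hf + n
% (the specific operators H and L used in the thesis are not stated in the abstract):
\[
  \hat{f} \;=\; \arg\min_{f}\; \lVert Hf - g\rVert_2^2 \;+\; \lambda\,\lVert Lf\rVert_2^2
  \qquad\Longrightarrow\qquad
  \hat{f} \;=\; \bigl(H^{\mathsf T}H + \lambda L^{\mathsf T}L\bigr)^{-1} H^{\mathsf T} g .
\]
```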
344

Suivi de caméra basé image en temps réel et cartographie de l'environnement / Real-time image-based RGB-D camera motion tracking and environment mapping

Tykkälä, Tommi 04 September 2013 (has links)
In this work, image-based estimation methods, also known as direct methods, are studied; they avoid feature extraction and matching completely. Cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, the 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are usable whenever the Lambertian illumination assumption holds, i.e. 3D points have constant color regardless of viewing angle. The main application domains in this work are indoor 3D reconstruction, robotics and augmented reality. The overall project goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real applications. The main questions for this work are: What is an efficient formulation for an image-based 3D pose estimation and structure refinement task? How should computation be organized to enable an efficient real-time implementation? What are the practical considerations of using image-based estimation methods in applications such as augmented reality and 3D reconstruction?
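A generic form of the photometric cost minimized by such direct methods is sketched below; the notation (warp w, rigid motion T) is illustrative rather than taken from the thesis.

```latex
% Generic direct (photometric) pose cost with illustrative notation: I_ref and I_cur are the
% reference and current images, and w(x_i; T) projects the 3D point x_i under the rigid motion T.
\[
  E(T) \;=\; \sum_{i} \Bigl( I_{\mathrm{cur}}\bigl(w(\mathbf{x}_i; T)\bigr) - I_{\mathrm{ref}}(\mathbf{x}_i) \Bigr)^{2},
  \qquad T = (R,\mathbf{t}) \in SE(3).
\]
```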
345

Parallel Sorting on the Heterogeneous AMD Fusion Accelerated Processing Unit

Delorme, Michael Christopher 18 March 2013 (has links)
We explore efficient parallel radix sort for the AMD Fusion Accelerated Processing Unit (APU). Two challenges arise: efficiently partitioning data between the CPU and GPU, and allocating data among memory regions. Our coarse-grained implementation utilizes both the GPU and CPU by sharing data at the beginning and end of the sort. Our fine-grained implementation utilizes the APU's integrated memory system to share data throughout the sort. Both implementations outperform the current state-of-the-art GPU radix sort from NVIDIA. We therefore demonstrate that the CPU can be used effectively to speed up radix sort on the APU. Our fine-grained implementation slightly outperforms our coarse-grained implementation, demonstrating the benefit of the APU's integrated architecture. This performance benefit is limited by constraints in the APU's architecture and programming model; we believe the benefit will increase once these limitations are addressed in future generations of the APU.
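For reference, the sketch below shows one least-significant-digit pass of radix sort on the CPU, with the histogram, prefix-sum and scatter stages that GPU and APU implementations parallelize; it does not reflect the thesis's CPU/GPU partitioning or memory placement.

```cpp
// CPU reference for least-significant-digit radix sort over 8-bit digits.
// Illustrates the histogram / prefix-sum / scatter structure only.
#include <cstddef>
#include <cstdint>
#include <vector>

void radix_pass(const std::vector<uint32_t>& in, std::vector<uint32_t>& out, int shift) {
    constexpr int RADIX = 256;
    std::vector<std::size_t> count(RADIX, 0);
    for (uint32_t v : in) ++count[(v >> shift) & 0xFF];     // 1) histogram of digit values
    std::size_t sum = 0;
    for (int d = 0; d < RADIX; ++d) {                       // 2) exclusive prefix sum
        std::size_t c = count[d];
        count[d] = sum;
        sum += c;
    }
    out.resize(in.size());
    for (uint32_t v : in)                                   // 3) stable scatter by digit
        out[count[(v >> shift) & 0xFF]++] = v;
}

void radix_sort(std::vector<uint32_t>& keys) {
    std::vector<uint32_t> tmp;
    for (int shift = 0; shift < 32; shift += 8) {           // four 8-bit digit passes
        radix_pass(keys, tmp, shift);
        keys.swap(tmp);
    }
}
```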
346

Greitas ir tikslus objekto parametrų nustatymas mašininės regos sistemose / Fast and accurate object parameter detection in machine vision systems

Kazakevičius, Tadas 10 June 2011 (has links)
Object recognition and position estimation can be applied to many problems in industry, from electronics to food processing. One important problem in laser processing is transforming laser work trajectories defined on a fixed object model: in real applications the object may be rotated or translated when it is placed in the laser work area, so the translation and rotation that fit the user-defined model must be found. Many methods exist for object parameter detection, but image-processing tasks demand substantial computing power, and recent research shows large performance gains when such processing is moved from the CPU to the GPU. The main goal of this work is to build a machine vision system that quickly and accurately finds an object's position with respect to a chosen object model, focusing on detection performed on the GPU; the principles, advantages and drawbacks of methods used in practice are reviewed, and a real-time method that uses the GPU (implemented in C++ and the GLSL shading language, so it can be adapted to different hardware and operating systems) is designed, implemented and evaluated. Work size: 53 pages of text (excluding appendices), 30 figures, 2 tables, 26 bibliographic sources.
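The abstract does not describe the detection method itself, so the following is only an illustrative baseline of the kind of search a GPU would accelerate: brute-force template matching by sum of absolute differences over integer translations (rotation search omitted).

```cpp
// Illustrative baseline only: brute-force template matching by sum of absolute differences
// over integer translations. The thesis's actual GPU/GLSL method is not described in the
// abstract; this merely shows the kind of exhaustive search being accelerated.
#include <cstdint>
#include <cstdlib>
#include <limits>
#include <utility>
#include <vector>

std::pair<int, int> best_offset(const std::vector<uint8_t>& img, int iw, int ih,
                                const std::vector<uint8_t>& tpl, int tw, int th) {
    long best = std::numeric_limits<long>::max();
    std::pair<int, int> pos{0, 0};
    for (int y = 0; y + th <= ih; ++y)
        for (int x = 0; x + tw <= iw; ++x) {
            long sad = 0;                                  // dissimilarity at offset (x, y)
            for (int j = 0; j < th && sad < best; ++j)     // early exit once worse than best
                for (int i = 0; i < tw; ++i)
                    sad += std::abs(int(img[(y + j) * iw + (x + i)]) - int(tpl[j * tw + i]));
            if (sad < best) { best = sad; pos = {x, y}; }
        }
    return pos;
}
```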
348

Visualização da curvatura de objetos implícitos em um sistema extensável. / Curvature visualization of implicit objects in an extensible system.

Cabral, Allyson Ney Teodosio 11 February 2010 (has links)
In this work we study the problem of visualizing curvature on surfaces implicitly defined by functions f: [0,1]³ → [0,1], using the ray casting technique. Since in general only sampled values of f are known, we study a tricubic interpolation method in order to compute second-order derivatives accurately. The implementation was designed as modules for Voreen, a framework for volume rendering and image processing that uses the processing power of current graphics cards to accelerate rendering. Financial support: Fundação de Amparo a Pesquisa do Estado de Alagoas.
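For context, one standard way to obtain curvature from the derivatives of the implicit function is sketched below, which is why accurate second-order derivatives (here obtained via tricubic interpolation) matter; the exact formulation used in the work is not given in the abstract.

```latex
% One standard formulation of curvature for an implicit surface f = const, using the gradient
% g = \nabla f (row vector) and the Hessian H of f; H^{*} is the adjugate (cofactor) matrix of H,
% and signs depend on the chosen orientation.
\[
  \kappa_{\mathrm{mean}} \;=\; \frac{\,g\,H\,g^{\mathsf T} - \lvert g\rvert^{2}\,\operatorname{tr}(H)\,}{2\,\lvert g\rvert^{3}},
  \qquad
  \kappa_{\mathrm{Gauss}} \;=\; \frac{g\,H^{*}\,g^{\mathsf T}}{\lvert g\rvert^{4}} .
\]
```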
349

Mesure d’un champ de masse volumique par Background Oriented Schlieren 3D. Étude d’un dispositif expérimental et des méthodes de traitement pour la résolution du problème inverse / Density field measurements using BOS 3D: study of an experimental setup and inverse problem resolution

Todoroff, Violaine 09 December 2013 (has links)
This thesis consists in implementing a BOS3D (Background Oriented Schlieren 3D) experimental setup at ONERA for the reconstruction of the instantaneous density field of a flow, and in developing a reconstruction algorithm that provides results quickly and remains robust with a small number of views. The first step was to develop a BOS3D reconstruction algorithm applicable to all experimental configurations. To do so, the direct problem, that is, the equation of light deflection through a medium of inhomogeneous optical index, was reformulated in algebraic form and a criterion to be minimized was defined. This formulation, together with the equations resulting from the optimization methods needed to minimize the criterion, was parallelized to allow implementation on GPU. The algorithm was then tested on reference cases from numerical simulation to check whether the field reconstructed by the algorithm was consistent with the one provided. In this context, we developed a tool to simulate a virtual BOS3D setup and obtain the deflection fields associated with numerical flows. These deflection fields were then provided as input to the reconstruction code and allowed us to study the sensitivity of the algorithm to many parameters, such as noise on the data, calibration errors, mesh discretization, the type of regularization and the positioning of the cameras. In parallel with the study of the reconstruction method by simulation, we set up a BOS3D experimental bench for the reconstruction of instantaneous density fields, which required studying a new measurement approach involving multi-camera calibration techniques and new background illumination strategies. The main result of this work is the realization of the BOS3D experimental setup at DMAE; the experimental data were finally used as input to the algorithm to validate its behavior on real data and to obtain 3D reconstructions of instantaneous and mean density fields. In addition, an analysis of the behavior of the BOS3D numerical method is proposed according to the nature of the observed flows and the acquisition configuration.
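For context, the generic physical model behind BOS is sketched below; the constants and the exact forward model used in the thesis are not given in the abstract.

```latex
% Generic physical model behind BOS: the measured background displacement comes from ray
% deflection through the refractive-index field n, which the Gladstone-Dale relation links
% to the density field rho (K is the Gladstone-Dale constant of the gas).
\[
  \boldsymbol{\varepsilon} \;\approx\; \int_{\mathrm{ray}} \frac{1}{n}\,\nabla_{\!\perp} n \,\mathrm{d}s,
  \qquad
  n - 1 \;=\; K\,\rho .
\]
```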
350

Méthode de reconstruction adaptive en tomographie par rayons X : optimisation sur architectures parallèles de type GPU / Development of a 3D adaptive shape algorithm for X-ray tomography reconstruction : speed-up on GPU and application to NDT

Quinto, Michele Arcangelo 05 April 2013 (has links)
Tomographic reconstruction from projection data is an inverse problem widely used in medical imaging and, more modestly, in non-destructive testing. With a sufficiently large number of projections over the required angular range, analytical algorithms such as filtered backprojection (FBP) allow fast and accurate reconstructions. However, in the case of a limited number of views (low-dose imaging) and/or a limited angle (specific constraints of the setup), the data available for inversion are incomplete, the problem becomes more ill-conditioned, and the results show significant artifacts. In these situations, an alternative approach, based on a discrete model of the problem, consists in using an iterative algorithm or a statistical formulation of the problem to compute an estimate of the unknown object. These methods are classically based on a discretization of the volume into a set of voxels and provide 3D maps of the object's density; computation time and memory requirements are their main weaknesses. Moreover, whatever the application, the volumes are subsequently segmented for quantitative analysis, and given the wide range of existing segmentation tools, based on different interpretations of contours and different energy functionals to minimize, the choices are numerous and the results depend on them. This thesis presents a new approach that reconstructs the object simultaneously with the segmentation of its constituent materials. The reconstruction process is no longer based on a regular grid of pixels (resp. voxels) but on a mesh of non-regular triangles (resp. tetrahedra) that adapts to the shape of the object. After an initialization phase, the method alternates iteratively between three main steps, reconstruction, segmentation and mesh adaptation, until convergence. Iterative reconstruction algorithms commonly used with a conventional image representation have been adapted and optimized to run on irregular grids of triangular or tetrahedral elements. For the segmentation step, two methods, one based on a parametric approach (snakes) and the other on a geometric approach (level sets), have been implemented in order to handle objects of different natures (single- and multi-material). The mesh is adapted to the content of the estimated image using the previously segmented contours, refining it around the details of the object and coarsening it in regions containing little information. At the end of the process, the result is a classical tomographic image in gray levels, but its representation on a mesh adapted to its content directly provides an associated segmentation. The results show that the adaptive part of the method represents objects efficiently and drastically reduces the memory needed for storage. In this context, a 2D version of the reconstruction operators on a GPU-type parallel architecture demonstrates the feasibility of the whole process, and an optimized version of the 3D operators provides even more efficient computations.
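As a hedged illustration of the iterative reconstruction step, the sketch below shows a Kaczmarz (ART-style) sweep for the discretized system A x = p over generic basis coefficients; the thesis instead attaches the unknowns to an adaptive triangular/tetrahedral mesh and couples this step with segmentation and mesh adaptation.

```cpp
// Hedged sketch of a Kaczmarz (ART-style) sweep for the discretized system A x = p,
// where A holds projection weights, p the measured projections and x the unknowns.
// A dense matrix over generic basis coefficients is used for brevity; the thesis's
// operators act on an adaptive triangular/tetrahedral mesh rather than a pixel grid.
#include <cstddef>
#include <vector>

void art_sweep(const std::vector<std::vector<double>>& A, const std::vector<double>& p,
               std::vector<double>& x, double relax) {
    for (std::size_t i = 0; i < A.size(); ++i) {          // one sweep over all rays
        const std::vector<double>& ai = A[i];
        double dot = 0.0, norm2 = 0.0;
        for (std::size_t j = 0; j < ai.size(); ++j) {
            dot += ai[j] * x[j];
            norm2 += ai[j] * ai[j];
        }
        if (norm2 == 0.0) continue;
        double c = relax * (p[i] - dot) / norm2;          // relaxed residual of ray i
        for (std::size_t j = 0; j < ai.size(); ++j)
            x[j] += c * ai[j];                            // project x onto hyperplane i
    }
}
```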
