511

Diffusion MRI processing for multi-compartment characterization of brain pathology / Caractérisation de pathologies cérébrales par l’analyse de modèles multi-compartiment en IRM de diffusion

Hédouin, Renaud 12 June 2017 (has links)
Diffusion-weighted imaging (DWI) is a specific type of MRI acquisition based on the direction of diffusion of water molecules in the brain. Through several acquisitions, it allows the brain microstructure, such as white matter, to be modelled at scales significantly smaller than the voxel resolution. Acquiring this many images in a clinical setting requires very fast acquisition techniques such as single-shot imaging, but these acquisitions suffer from large local distortions. We propose a block-matching registration method based on the acquisition of images with opposite phase-encoding directions (PED). This technique, designed specifically for echo-planar images (EPI) yet generic in principle, robustly corrects the images and provides a deformation field. That field can be applied to an entire DWI series from a single reversed b0 image, allowing distortion correction at a minimal additional acquisition cost. The registration algorithm has been validated on both phantom and in-vivo data and is available in our open-source medical image processing toolbox, Anima. From these diffusion images we construct multi-compartment models (MCM) that represent the complex microstructure of the brain. To conduct studies and produce statistical analyses on these models, we need to register and average them and to build atlases. We propose a general method for interpolating MCM, formulated as a simplification problem based on spectral clustering. This technique, adaptable to any MCM, has been validated on both synthetic and real data. From a registered dataset, we then perform voxel-level analyses using statistics on the MCM parameters. A tractography specifically designed for MCM is also used for along-tract analyses based on individual compartments. All these tools are designed and applied to real data, contributing to the search for biomarkers of brain pathologies.
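To illustrate the idea of interpolating multi-compartment models as a simplification problem solved by spectral clustering, here is a minimal, hypothetical Python sketch (not the Anima implementation described in the thesis): compartments pooled from the voxels being averaged are grouped by their orientations with scikit-learn's SpectralClustering, and each cluster is collapsed into one weighted mean compartment. The function name, input format, and the orientation-only affinity are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def interpolate_mcm(directions, weights, n_compartments):
    """Sketch: merge compartments pooled from several voxels into
    n_compartments clusters and collapse each cluster by weighted averaging.

    directions : (n, 3) unit orientation vectors of the pooled compartments
    weights    : (n,) interpolation weights attached to each compartment
    """
    directions = np.asarray(directions, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Affinity from angular similarity; abs() treats opposite directions alike.
    affinity = np.abs(directions @ directions.T)

    labels = SpectralClustering(n_clusters=n_compartments,
                                affinity="precomputed",
                                assign_labels="discretize",
                                random_state=0).fit_predict(affinity)

    merged = []
    for k in range(n_compartments):
        mask = labels == k
        if not mask.any():
            continue
        w = weights[mask]
        d = (directions[mask] * w[:, None]).sum(axis=0)
        merged.append((d / np.linalg.norm(d), w.sum()))  # mean direction, total weight
    return merged
```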
512

Matchings Between Point Processes

Jana, Indrajit 06 1900 (has links) (PDF)
No description available.
513

Analysis of 3D color matches for the creation and consumption of video content / Appariement d'images par appariement de couleurs dans un espace 3D pour la création et la consommation de contenus vidéo

Sheikh Faridul, Hasan 06 January 2014 (has links)
The objective of this thesis is to propose a solution to the problem of color consistency between images of the same scene, whether acquired from the same viewpoint or from different ones, irrespective of the acquisition conditions. This is a major challenge in computer vision, since from one view to the next the illumination (spectrum, intensity) and the capture conditions (viewpoint, camera type, acquisition settings such as focus, exposure, and white balance) may vary, producing color differences over all or part of the observed scene. We therefore present a new color mapping framework able to compensate for these color differences and achieve color consistency between views of the same scene. The proposed framework works in four steps: (1) geometric correspondences are computed from interest points (SIFT and MSER); (2) color correspondences are robustly collected from the neighborhood of these sparse feature correspondences, despite their limited accuracy; (3) the correspondences are modelled by a RANSAC-type robust estimation; (4) color differences are compensated first by a nonlinear, channel-wise polynomial mapping, then by a linear cross-channel estimation combined with a CAT-type illuminant estimation to account for inter-channel correlations and illuminant-induced color changes. For experimental assessment, we propose two new image datasets: one with ground truth for quantitative evaluation and one without ground truth for qualitative evaluation. A series of experiments investigates the robustness of the framework and compares it with the state of the art; we also provide a brief overview, sample results, and future perspectives for various applications of color mapping. The results show that, unlike many state-of-the-art methods, the proposed color mapping is robust to changes in illumination spectrum, illumination intensity, imaging devices (sensor, optics), device settings (exposure, white balance), and viewing conditions (viewing angle and distance).
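A minimal Python/OpenCV sketch of the general idea behind steps (1) to (4) is given below. It is not the thesis implementation: it uses SIFT only (no MSER), a hand-rolled RANSAC-style polynomial fit per channel, and omits the cross-channel and CAT illuminant steps; all function names, thresholds, and the 5x5 color-sampling neighborhood are assumptions.

```python
import cv2
import numpy as np

def color_correspondences(src, ref):
    """Steps (1)-(2): collect color pairs around matched SIFT keypoints."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(src, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test
    src_cols, ref_cols = [], []
    for m in good:
        x1, y1 = map(int, k1[m.queryIdx].pt)
        x2, y2 = map(int, k2[m.trainIdx].pt)
        # Average a small neighborhood around each keypoint (assumed 5x5).
        src_cols.append(src[max(y1 - 2, 0):y1 + 3, max(x1 - 2, 0):x1 + 3].reshape(-1, 3).mean(0))
        ref_cols.append(ref[max(y2 - 2, 0):y2 + 3, max(x2 - 2, 0):x2 + 3].reshape(-1, 3).mean(0))
    return np.array(src_cols), np.array(ref_cols)

def robust_poly_fit(x, y, degree=2, iters=200, tol=8.0):
    """Steps (3)-(4), simplified: RANSAC-style per-channel polynomial fit."""
    rng = np.random.default_rng(0)
    best, best_count = np.polyfit(x, y, degree), 0
    for _ in range(iters):
        idx = rng.choice(len(x), size=degree + 1, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], degree)
        inliers = np.abs(np.polyval(coeffs, x) - y) < tol
        if inliers.sum() > best_count:
            best_count, best = inliers.sum(), np.polyfit(x[inliers], y[inliers], degree)
    return best

def map_colors(src, ref):
    """Remap src so its colors approximate those of ref, channel by channel."""
    s, r = color_correspondences(src, ref)
    out = src.astype(np.float64)
    for c in range(3):
        coeffs = robust_poly_fit(s[:, c], r[:, c])
        out[..., c] = np.polyval(coeffs, out[..., c])
    return np.clip(out, 0, 255).astype(np.uint8)
```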
514

Generalized identity matching in the pigeon: Effects of extended observing- and choice-response requirements.

Hayashi, Yusuke 08 1900 (has links)
Four experimentally naïve White Carneau pigeons learned to match three colors to each other in a variant of an identity matching-to-sample procedure with a fixed-ratio 20 (FR 20) requirement on samples and a response-initiated fixed-interval 8-s (FI 8-s) schedule on comparisons. In Experiment 1, the extent to which subjects were matching on the basis of identity was assessed by presenting, in extinction, test trials comprising novel stimuli serving either as the sample (and matching comparison) or as the nonmatching comparison. The results of Experiment 1 suggested intermediate or little to no transfer on the basis of identity. Experiment 2 reassessed transfer on the basis of identity with differential reinforcement on the test trials. Under these conditions, two of the four birds performed substantially above chance levels. These data imply that while the extended response requirements may be necessary, other procedural aspects may be responsible for generalized identity matching in the pigeon.
515

Algoritmos de casamento de imagens com filtragem adaptativa de outliers / Image matching algorithms with adaptive filtering of outliers.

Jonathan da Silva Ramos 01 December 2016 (has links)
Image matching plays a major role in many applications, such as 3D object reconstruction, pattern recognition, and microscopic imaging. It comprises three main steps: (1) interest point selection; (2) feature extraction at each interest point; (3) matching of interest points between images. For steps 1 and 2, algorithms such as SIFT and SURF have produced satisfactory results. For step 3, however, outliers occur, i.e., interest points that were incorrectly matched, and even a few outliers can lead to an undesirable final result. State-of-the-art consensus (outlier-removal) algorithms have a high computational cost that grows as the number of outliers increases. To reduce the processing time required by these algorithms, this work proposes FOMP (Filtering out Outliers from Matched Points), which filters outliers from the initially matched point sets. FOMP treats each point set as a complete graph whose edge weights are the distances between the points; the vertex with the largest sum of differences between corresponding edge weights is removed. To validate FOMP, experiments were carried out on four image datasets with distinct characteristics: (a) variations in rotation or camera zoom; (b) repetitive patterns, which lead to duplicated feature vectors; (c) deformable objects such as plastics, papers, or fabric; (d) affine transformations (different viewpoints). The experiments showed that FOMP removes more than 65% of the outliers while keeping about 98% of the inliers. The proposed approach preserves the precision of the consensus methods while halving the processing time of graph-based approaches.
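A minimal Python sketch of that filtering idea is shown below, under stated assumptions: the function name, the fixed number of removals, and the iterative re-computation are illustrative choices, not the exact criteria of the published FOMP algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def filter_outliers(points_a, points_b, n_remove):
    """Sketch: treat each matched point set as a complete graph whose edge
    weights are pairwise distances; repeatedly drop the vertex whose edges
    differ most between the two graphs (the most inconsistent match)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    keep = list(range(len(a)))
    for _ in range(n_remove):
        da = squareform(pdist(a[keep]))        # edge weights in image A
        db = squareform(pdist(b[keep]))        # edge weights in image B
        score = np.abs(da - db).sum(axis=1)    # per-vertex inconsistency
        keep.pop(int(score.argmax()))          # remove the worst vertex
    return keep                                # indices of retained matches
```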
516

Soustava kamer jako stereoskopický senzor pro měření vzdálenosti v reálném čase / Real-time distance measurement with stereoscopic sensor

Janeček, Martin January 2014 (has links)
The project demonstrates calibration of a stereoscopic sensor. It also describes basic stereo-correspondence methods using the OpenCV library. The project includes the computation of disparity maps on the CPU or on the graphics card (using the OpenCL library).
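For illustration, a minimal OpenCV (Python) sketch of CPU disparity-map computation on an already rectified stereo pair follows; the file names are placeholders and the block-matching parameters are arbitrary, not values from the thesis. OpenCV's transparent API (cv2.UMat) can offload the same call to OpenCL-capable hardware.

```python
import cv2

# Rectified stereo pair (calibration and rectification, e.g. with
# cv2.stereoCalibrate and cv2.stereoRectify, are assumed done beforehand).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo correspondence on the CPU.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)   # int16 map, disparity scaled by 16

# Depth follows from disparity: Z = f * B / d (focal length f, baseline B).
```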
518

Realisierung einer Datenbank zur Erfassung von PA-Fragebögen und Matching zur ICF

Chill, Simon 23 January 2018 (has links)
A central repository for PA questionnaires would considerably ease access to them, and comprehensive search conditions would further improve the selection. That is the starting point of this work: creating a central repository for questionnaires would make choosing the right questionnaire considerably easier and faster. A central overview is of great benefit for supporting the people who work with questionnaires and for simplifying that work.
519

Automatic Generation of Trace Links in Model-driven Software Development

Grammel, Birgit 17 February 2014 (has links)
Traceability data provides knowledge of the dependencies and logical relations that exist among the artefacts created during software development. By reasoning over traceability data, conclusions can be drawn that increase the quality of the software. The paradigm of Model-Driven Software Engineering (MDSD) promotes generating software from models, which are specified in different modelling languages; in subsequent model transformations, these models are used to generate programming code automatically. Traceability data for the artefacts involved in an MDSD process can be used to increase software quality by providing the knowledge described above. Existing traceability solutions in MDSD generate traceability data from the model mapping performed during transformation execution, yet they still entail a wide range of open challenges. One challenge is that the collected traceability data does not adhere to a unified formal definition, which leads to poorly integrated traceability data and aggravates reasoning over it. Furthermore, these solutions all depend on the existence of an accessible transformation engine, which is not given in every MDSD setting, for instance with proprietary transformation engines or manually implemented transformations. In those cases the transformation engine cannot be instrumented to generate traceability data, resulting in a lack of traceability data. In this work we address these shortcomings by proposing a generic traceability framework for augmenting arbitrary transformation approaches with a traceability mechanism. To integrate traceability data from different transformation approaches, our approach features a methodology for the augmentation based on a design pattern, which supplies the engineer with recommendations for designing the traceability mechanism and for modelling traceability data. Additionally, to provide a traceability mechanism for inaccessible transformation engines, we leverage parallel model matching to generate traceability data for arbitrary source and target models. This approach is based on a language-agnostic concept of three similarity measures for matching, realised by exploiting metamodel matching techniques for graph-based model matching. Finally, we evaluate our approach on a set of transformations from an SAP business application and from the domain of MDSD.
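As a simplified illustration of deriving trace links by model matching rather than by instrumenting a transformation engine, the Python sketch below matches source and target model elements on a single name-based similarity measure. The thesis itself combines three language-agnostic similarity measures with graph-based metamodel matching, so this is only a toy stand-in; all names and thresholds are hypothetical.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """One toy similarity measure: normalized string similarity of element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def derive_trace_links(source_elements, target_elements, threshold=0.6):
    """Match each source element to its most similar target element and keep
    pairs above the threshold as trace links (source, target, score)."""
    links = []
    for s in source_elements:
        best = max(target_elements, key=lambda t: name_similarity(s, t))
        score = name_similarity(s, best)
        if score >= threshold:
            links.append((s, best, round(score, 2)))
    return links

# Toy usage: element names from a source model vs. a generated target model.
print(derive_trace_links(["Customer", "Order", "Invoice"],
                         ["CustomerEntity", "OrderTable", "ShipmentDao"]))
```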
520

Accelerating Reverse Engineering Image Processing Using FPGA

Harris, Matthew Joshua 10 May 2019 (has links)
No description available.
