  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Rigid and Non-rigid Point-based Medical Image Registration

Parra, Nestor Andres 13 November 2009 (has links)
The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods with better accuracy than existing methods. We first present PoIRe, which provides a framework for point-based global rigid registration. It allows a choice of search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that combines the best characteristics of the other two. We use a robust similarity measure that is insensitive to noise, which is often introduced during feature extraction. We demonstrate the robustness of PoIRe by using it to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with 3D Computed Tomography (CT) and Magnetic Resonance (MR) images. PoIRe was also evaluated on benchmark data sets from the blind Retrospective Image Registration Evaluation (RIRE) project. We show that PoIRe outperforms existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm. We extend the robust similarity measure used in PoIRe to non-rigid registration, adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registration we show that it performs better than existing methods and that it is less sensitive to starting conditions. We test our non-rigid registration method on available benchmark data sets for shape registration. Finally, we explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multi-modal registration.
For multi-modal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
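As a point of comparison for the search strategies described above, matched point pairs under a rigid transform admit a closed-form least-squares (Procrustes/SVD) solution. The sketch below is that simpler baseline, with invented data, not PoIRe itself:

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t minimising ||R @ src_i + t - dst_i||
    over matched 2D point pairs, via the SVD (Kabsch) solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 30-degree rotation and a translation from noise-free pairs
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(0).normal(size=(50, 2))
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_register(src, dst)
assert np.allclose(R, R_true) and np.allclose(t, [2.0, -1.0])
```

Unlike this closed-form baseline, the correspondences are unknown in the registration problems above, which is why search strategies and robust similarity measures are needed.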
52

Simulation of Physiological Signals using Wavelets

Bhojwani, Soniya Naresh January 2007 (has links)
No description available.
53

Multi-resolution physiological modeling for the analysis of cardiovascular pathologies / Modélisation physiologique multirésolution pour l'analyse des pathologies cardiovasculaires

Ojeda Avellaneda, David 10 December 2013 (has links)
This thesis presents three main contributions to the modeling and simulation of physiological systems. The first is a formalization of the methodology involved in multi-formalism and multi-resolution modeling. The second is the presentation and improvement of a modeling and simulation framework integrating a range of tools that support the definition, analysis, use, and sharing of complex mathematical models. The third is the application of this modeling framework to improve diagnostic and therapeutic strategies for clinical applications involving the cardiovascular system: hypertension-related heart failure (HF) and coronary artery disease (CAD). Prospective applications to cardiac resynchronization therapy (CRT) and to apnea-bradycardia of the premature newborn are also presented. These case studies include (i) the integration of a pulsatile heart into a global cardiovascular model that captures the short- and long-term regulation of the cardiovascular system and represents a form of heart failure, (ii) the analysis of the coronary hemodynamics and collateral circulation of patients with triple-vessel disease undergoing coronary artery bypass graft surgery, (iii) the construction of a coupled electrical and mechanical cardiac model for the optimization of the atrioventricular and interventricular delays of a biventricular pacemaker, and (iv) a model-based estimation of the sympathetic and vagal baroreflex responses of premature newborn lambs.
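For a flavor of the low-resolution lumped-parameter components such global cardiovascular models build on, a two-element Windkessel (arterial compliance C discharging through peripheral resistance R, driven by a pulsatile inflow) can be integrated in a few lines. All parameter values here are illustrative, not taken from the thesis:

```python
import math

def windkessel(R=1.0, C=1.5, dt=0.001, beats=10, period=0.8):
    """Forward-Euler integration of the two-element Windkessel model:
        C * dP/dt = Q_in(t) - P / R
    with a half-sine systolic inflow Q_in during the first 0.3 s of each beat."""
    P = 80.0          # initial arterial pressure (mmHg, illustrative)
    t = 0.0
    for _ in range(int(beats * period / dt)):
        phase = t % period
        # Systolic ejection: half-sine pulse of peak 400 ml/s; zero in diastole
        Q_in = 400.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0
        P += dt * (Q_in - P / R) / C
        t += dt
    return P

P_end = windkessel()
assert 0.0 < P_end < 300.0   # pressure settles into a bounded periodic regime
```

Coupling many such compartments, at different resolutions, is what makes the multi-formalism bookkeeping formalized in the thesis necessary.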
54

[en] A PREDICTIVE CACHE SYSTEM FOR REAL-TIME PROCESSING OF LARGE 2D GRAPHICAL DATA / [pt] UM SISTEMA DE CACHE PREDITIVO PARA O PROCESSAMENTO EM TEMPO-REAL DE GRANDES VOLUMES DE DADOS GRÁFICOS

SERGIO ESTEVAO MACHADO LISBOA PINHEIRO 31 March 2004 (has links)
[en] Nowadays, many areas of computer graphics need to process huge amounts of data. To visualize these data in real time, two problems must be solved. The first is the limited time available to perform rendering. The second arises from the limited storage capacity of high-speed memories, such as RAM and texture memory. To address the first problem, this work uses a multi-resolution representation of the graphical data, which keeps the amount of data processed during rendering roughly constant. The second problem is solved by a predictive memory management system based on the virtual memory model. This work proposes an architecture into which any storage device can be incorporated; devices are organized sequentially. The heart of the system consists in allocating an area of memory on each device and managing that space optimally. The predictive system aims to load in advance the data that will probably be used by the application in the near future. This work proposes an adaptive prediction algorithm specific to the visualization problem. The algorithm exploits information about the camera parameter variations as well as the data transfer rate to decide what should be loaded: the camera parameters help determine which data will likely be used by the application, while the transfer rate is used to decide which resolution level of those data should be loaded in advance to the high-speed devices. The predictive memory management system has been tested in real-time visualization of satellite images and virtual panoramas.
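The prefetching idea can be sketched with a toy fixed-size LRU cache whose predictor loads the tile one step ahead of the camera's motion. The tile naming, loader, and one-dimensional velocity here are invented for illustration, not the thesis's architecture:

```python
from collections import OrderedDict

class PredictiveTileCache:
    """LRU cache over tiles plus a predictor that prefetches along the
    camera's motion direction (a minimal sketch of predictive caching)."""

    def __init__(self, loader, capacity=4):
        self.loader = loader          # tile_id -> data, from the slow device
        self.capacity = capacity
        self.cache = OrderedDict()    # tile_id -> data, in LRU order
        self.misses = 0

    def _fetch(self, tile_id):
        if tile_id in self.cache:
            self.cache.move_to_end(tile_id)          # mark as recently used
        else:
            self.misses += 1
            self.cache[tile_id] = self.loader(tile_id)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)       # evict least recently used
        return self.cache[tile_id]

    def get(self, tile_id, velocity):
        data = self._fetch(tile_id)
        self._fetch(tile_id + velocity)  # prefetch the tile ahead of the motion
        return data

cache = PredictiveTileCache(loader=lambda i: f"tile-{i}", capacity=4)
for i in range(5):                # camera panning right, one tile per frame
    cache.get(i, velocity=1)
assert cache.misses == 6          # tiles 0..5 each loaded exactly once
```

After the first frame, every tile the camera reaches is already resident, which is the effect the predictive system above aims for.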
55

A multi-resolution discontinuous Galerkin method for rapid simulation of thermal systems

Gempesaw, Daniel 29 August 2011 (has links)
Efficient, accurate numerical simulation of coupled heat transfer and fluid dynamics systems continues to be a challenge. Direct numerical simulation (DNS) packages like FLUENT exist and are sufficient for design and predicting flow in a static system, but in larger systems where input parameters can change rapidly, the cost of DNS increases prohibitively. Major obstacles include handling the scales of the system accurately - some applications span multiple orders of magnitude in both the spatial and temporal dimensions, making an accurate simulation very costly. There is a need for a simulation method that returns accurate results of multi-scale systems in real time. To address these challenges, the Multi-Resolution Discontinuous Galerkin (MRDG) method has been shown to have advantages over other reduced order methods. Using multi-wavelets as the local approximation space provides an inherently efficient method of data compression, while the unique features of the Discontinuous Galerkin method make it well suited to composition with wavelet theory. This research further exhibits the viability of the MRDG as a new approach to efficient, accurate thermal system simulations. The development and execution of the algorithm will be detailed, and several examples of the utility of the MRDG will be included. Comparison between the MRDG and the "vanilla" DG method will also be featured as justification of the advantages of the MRDG method.
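Why wavelet coefficients compress smooth solutions can be seen with the simplest wavelet, the Haar basis. This scalar one-level transform is only an illustration of the principle; the MRDG uses multi-wavelets over discontinuous Galerkin elements, not this:

```python
import numpy as np

def haar_decompose(signal):
    """One level of the Haar transform: pairwise averages (coarse band)
    and pairwise half-differences (detail band)."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / 2.0
    det = (s[0::2] - s[1::2]) / 2.0
    return avg, det

# Where the field is locally smooth, the detail coefficients are near zero,
# so thresholding them compresses the representation with little loss.
signal = np.array([4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 2.0, 2.0])
avg, det = haar_decompose(signal)
assert np.allclose(avg, [4.0, 4.0, 8.0, 2.0])
assert np.allclose(det, 0.0)          # piecewise-constant pairs: details vanish

# Exact reconstruction from the two bands
recon = np.empty_like(signal)
recon[0::2] = avg + det
recon[1::2] = avg - det
assert np.allclose(recon, signal)
```

The same compression-by-thresholding logic, applied adaptively per element and per level, is what gives multi-resolution methods their efficiency on multi-scale problems.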
56

Modèles de classification hiérarchiques d'images satellitaires multi-résolutions, multi-temporelles et multi-capteurs. Application aux désastres naturels / Hierarchical joint classification models for multi-resolution, multi-temporal and multi-sensor remote sensing images. Application to natural disasters

Hedhli, Ihsen 18 March 2016 (has links)
The capabilities to monitor the Earth's surface, notably in urban and built-up areas, for example in the framework of protection from environmental disasters such as floods or earthquakes, play important roles from social, economic, and human viewpoints. In this framework, accurate and time-efficient classification methods are important tools to support the rapid and reliable assessment of ground changes and damage induced by a disaster, in particular when an extensive area has been affected. Given the substantial amount and variety of data currently available from last-generation very-high-resolution (VHR) satellite missions such as Pléiades, COSMO-SkyMed, or RadarSat-2, the main methodological difficulty is to develop classifiers that are powerful and flexible enough to exploit the benefits of multi-band, multi-resolution, multi-date, and possibly multi-sensor input imagery. In the proposed approaches, multi-date/multi-sensor and multi-resolution fusion are based on explicit statistical modeling. The method combines a joint statistical model of multi-sensor and multi-temporal images with hierarchical Markov random field (MRF) modeling, leading to supervised statistical classification approaches. We have developed novel hierarchical MRF models, based on the marginal posterior mode (MPM) criterion, that support information extraction from multi-temporal and/or multi-sensor data and allow the joint supervised classification of multiple images taken over the same area at different times, from different sensors, and/or at different spatial resolutions. The developed methods have been experimentally validated with complex optical multispectral (Pléiades), X-band SAR (COSMO-SkyMed), and C-band SAR (RadarSat-2) imagery of the Haiti site.
57

Algorithms For Geospatial Analysis Using Multi-Resolution Remote Sensing Data

Uttam Kumar, * 03 1900 (has links) (PDF)
Geospatial analysis involves the application of statistical methods, algorithms, and information retrieval techniques to geospatial data. It incorporates time into spatial databases and facilitates the investigation of land cover (LC) dynamics through data, models, and analytics. LC dynamics induced by human and natural processes play a major role in global as well as regional scale patterns, which in turn influence weather and climate. Hence, understanding LC dynamics at the local/regional as well as global levels is essential for evolving appropriate management strategies to mitigate the impacts of LC changes; this can be captured through multi-resolution remote sensing (RS) data. However, with the advancements in sensor technologies, suitable algorithms and techniques are required for optimal integration of information from multi-resolution sensors that are cost effective while overcoming possible data and methodological constraints. In this work, several per-pixel traditional and advanced classification techniques have been evaluated on multi-resolution data, along with the role of ancillary geographical data in classifier performance. Techniques for linear and non-linear un-mixing, endmember variability, and determination of the spatial distribution of class components within a pixel have been applied and validated on multi-resolution data. An endmember estimation method is proposed and its performance compared with manual, semi-automatic, and fully automatic methods of endmember extraction. A novel technique, the Hybrid Bayesian Classifier, is developed for per-pixel classification: the class prior probabilities are determined by un-mixing a low spatial-high spectral resolution multi-spectral image, while the posterior probabilities are determined from ground training data assigned to every pixel of a high spatial-low spectral resolution multi-spectral image in Bayesian classification.
These techniques have been validated with multi-resolution data for various landscapes at varying altitudes. As a case study, spatial metrics and cellular automata based models were applied to a rapidly urbanising landscape at moderate altitude.
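The prior/posterior split behind such a hybrid classifier can be illustrated with a one-band Bayesian rule where the priors vary per pixel (e.g. abundance fractions from un-mixing a coarser image). The class means, spreads, and priors below are invented for illustration, not taken from the thesis:

```python
import numpy as np

def bayes_classify(pixels, means, stds, priors):
    """Per-pixel Bayesian decision: argmax_c prior[i, c] * N(pixel_i; mean_c, std_c).
    Likelihoods come from (hypothetical) training data; priors may differ
    per pixel, e.g. from un-mixing a low-spatial-resolution image."""
    pixels = np.asarray(pixels, dtype=float)[:, None]        # shape (n, 1)
    means, stds = np.asarray(means), np.asarray(stds)        # shape (c,)
    lik = np.exp(-0.5 * ((pixels - means) / stds) ** 2) / stds
    post = priors * lik                                      # unnormalised posterior
    return post.argmax(axis=1)

# Two illustrative classes: "water" (mean 0.1) and "vegetation" (mean 0.6).
# The last two pixels are spectrally ambiguous, so the priors decide.
labels = bayes_classify(
    pixels=[0.15, 0.55, 0.35, 0.35],
    means=[0.1, 0.6], stds=[0.1, 0.1],
    priors=np.array([[0.5, 0.5],
                     [0.5, 0.5],
                     [0.9, 0.1],    # un-mixing suggests mostly water here...
                     [0.1, 0.9]]),  # ...and mostly vegetation here
)
assert list(labels) == [0, 1, 0, 1]
```

The ambiguous pixels show the point of per-pixel priors: identical observations can receive different labels when the coarse-resolution abundance evidence differs.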
58

Modèles de représentation multi-résolution pour le rendu photo-réaliste de matériaux complexes / Multi-resolution representation models for the photorealistic rendering of complex materials

Baril, Jérôme 11 January 2010 (has links)
The emergence of digital capture devices has enabled the development of 3D acquisition to scan the properties of a real object: its shape and its appearance. This process provides a dense and accurate representation of real objects and avoids the costly physical-simulation process of modeling an object. Thus, the issues have evolved: they no longer concern only the modeling of the characteristics of a real object, but the processing of acquired data so as to integrate a copy of reality into an image-synthesis process. In this thesis, we propose new representations for appearance functions obtained from acquisition, with the aim of defining a set of multiscale models of low size complexity that can be rendered in real time on today's graphics hardware.
59

Contribution à l'analyse et à la recherche d'information en texte intégral : application de la transformée en ondelettes pour la recherche et l'analyse de textes / Contribution to full-text analysis and information retrieval: application of the wavelet transform to text retrieval and analysis

Smail, Nabila 27 January 2009 (has links)
The purpose of information retrieval systems is to ease access to a set of documents, allowing a user to find those that are relevant, i.e. those whose content best matches his or her information need. The quality of the retrieval results is measured by comparing the system's answers with the ideal answers the user hopes to receive: the closer the system's answers are to those the user expects, the better the system is judged to perform. The first systems performed Boolean searches, i.e. searches in which only the presence or absence of a query term in a text determines whether it is selected. It was not until the end of the 1960s that the vector model was applied to information retrieval. In both models, only the presence, absence, or frequency of words in the text carries information. SMART (System for the Mechanical Analysis and Retrieval of Text) [4] was one of the first retrieval systems to adopt this approach in modeling textual data and in computing the similarity between documents or with respect to a query. Several improvements to information retrieval systems exploit the semantic relations that exist between the terms of a document: LSI (Latent Semantic Indexing) [5], for example, does so through analysis methods that measure the co-occurrence of two terms in the same context, while Hearst and Morris [6] use online thesauri to create semantic links between terms in a lexical-chaining process. In this thesis we develop a new retrieval system that represents textual data as signals. This new form of representation subsequently allows us to apply numerous mathematical tools from signal theory, such as wavelet transforms, hitherto unused in the field of textual information retrieval.
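The representation step can be sketched as follows: each document becomes a signal with one sample per vocabulary term, to which signal-theoretic tools can then be applied. The encoding, vocabulary, and similarity measure below are illustrative choices, not the thesis's actual construction:

```python
import numpy as np

def doc_to_signal(doc, vocab):
    """Map a document to a signal indexed by vocabulary position, each
    sample holding that term's frequency (an illustrative encoding)."""
    words = doc.lower().split()
    return np.array([words.count(t) for t in vocab], dtype=float)

def cosine(a, b):
    """Cosine similarity between two document signals."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vocab = ["wavelet", "transform", "retrieval", "text", "image"]
d1 = doc_to_signal("wavelet transform for text retrieval", vocab)
d2 = doc_to_signal("text retrieval with wavelet transform", vocab)
d3 = doc_to_signal("image image image", vocab)

assert cosine(d1, d2) == 1.0   # same term frequencies -> identical signals
assert cosine(d1, d3) == 0.0   # no shared vocabulary terms
```

Once documents live in a signal space like this, transforms such as the wavelet transform can be applied to the signals before comparing them, which is the direction the thesis pursues.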
60

Intégration et optimisation des grilles régulières de points dans une architecture SOLAP relationnelle / Integration and optimization of regular grids of points analysis in the relational SOLAP architecture

Zaamoune, Mehdi 08 January 2015 (has links)
Continuous fields are spatial representations used to model phenomena such as temperature, pollution, or altitude. They are defined by a mapping function f that assigns a value of the studied phenomenon to each location p of the studied area. Moreover, the representation of continuous fields at different scales or resolutions is often essential for effective spatial analysis. The advantage of continuous fields lies in the level of detail generated by the continuity of the spatial data and in the quality of the spatial analysis provided by multi-resolution; their downside in spatio-multidimensional analysis is the high cost in analysis and storage performance. Spatial data warehouses and spatial OLAP systems (SDW and SOLAP) are decision-support systems that enable the spatio-multidimensional analysis of large volumes of spatial and non-spatial data. The analysis of continuous fields in the SOLAP architecture is an interesting research challenge: various studies have addressed the integration of such representations into SOLAP systems, but this integration is still at an early stage. This thesis therefore focuses on integrating incomplete continuous fields, represented by a regular grid of points, into spatio-multidimensional analysis.
Integration into the SOLAP system requires that the analysis of continuous fields support (i) conventional OLAP operators, (ii) a continuous view of the spatial data, (iii) spatial operators (spatial slice), and (iv) querying the data at different predefined resolutions. In this thesis we propose different approaches for the analysis of continuous fields in SOLAP at different levels of the relational architecture, from conceptual modeling to the optimization of computing performance. We propose a logical model, FISS, that optimizes the performance of multi-resolution analysis based on interpolation methods. We then present a methodology based on the Clustering sampling method that optimizes aggregation operations over regular grids of points in the relational SOLAP architecture by estimating the results.
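Answering a query between the sampled points of an incomplete field is an interpolation problem. A minimal bilinear sketch on a unit-spaced regular grid (illustrative only; the thesis's FISS model chooses among interpolation methods, not necessarily this one):

```python
import numpy as np

def bilinear(grid, x, y):
    """Value of a regular grid of points at an arbitrary (x, y), by blending
    the four surrounding grid points. Assumes unit grid spacing and that
    (x, y) lies inside the grid."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0                      # fractional offsets in the cell
    return ((1 - fx) * (1 - fy) * grid[y0, x0] +
            fx * (1 - fy) * grid[y0, x0 + 1] +
            (1 - fx) * fy * grid[y0 + 1, x0] +
            fx * fy * grid[y0 + 1, x0 + 1])

# A 2x2 temperature grid queried at its centre and at a sampled point
grid = np.array([[10.0, 20.0],
                 [30.0, 40.0]])
assert bilinear(grid, 0.5, 0.5) == 25.0   # average of the four corners
assert bilinear(grid, 0.0, 0.0) == 10.0   # exact at grid points
```

This is the "continuous view" requirement (ii) in miniature: the stored data are discrete points, but queries may land anywhere in the field.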
