211

Anatomy of the SIFT method / L'Anatomie de la méthode SIFT

Rey Otero, Ives 26 September 2015 (has links)
This dissertation is an in-depth analysis of the SIFT method, the most popular image comparison method. By proposing a sampling of the Gaussian scale-space, SIFT was also the first method to put scale-space theory into practice and to exploit its scale-invariance properties. SIFT associates with each image a set of descriptors that are invariant to scale, rotation and translation; descriptors from different images can then be compared in order to match the images. Given its countless applications and variants, studying an algorithm published more than a decade ago may seem surprising. Yet little has been done to really understand this central algorithm and to establish rigorously how much it can still be improved for high-precision applications. The study is organized in four parts. The first part addresses the exact computation of the Gaussian scale-space, which is at the heart of SIFT and of most of its competitors. The second part is a meticulous dissection of the long chain of transformations that constitutes the SIFT method: every design parameter, from the extraction of invariant keypoints to the computation of feature vectors, is documented and its influence analyzed. This dissection is accompanied by an online publication of the algorithm, with a C implementation and a demonstration platform that lets the reader explore the influence of each parameter. In the third part, we define an exact experimental framework to verify that SIFT reliably and stably detects the extrema of the continuous scale-space from the discrete grid. This leads to practical conclusions on the correct sampling of the Gaussian scale-space and on strategies for filtering out unstable detections. The same framework is used to analyze the influence of image perturbations (aliasing, noise, blur). This analysis demonstrates that the room for improvement is limited for SIFT and for all of its variants that extract keypoints from a scale-space; it also shows that over-sampling the scale-space improves the extraction of extrema, and that restricting detection to the higher scales improves robustness to image perturbations. The last part deals with the performance evaluation of keypoint detectors. The most commonly used performance metric is repeatability. We show that this metric is biased: it favors methods that produce redundant (overlapping) detections. To remove this bias, we propose a variant that takes the spatial distribution of the detections into account, and we use it to revisit a classic benchmark. Once the redundancy of detections is accounted for, SIFT is shown to outperform many of its most recent competitors. This corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
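The Gaussian scale-space sampling discussed above can be sketched in a few lines. This is an illustrative sketch, not the thesis's implementation: the parameter names and default values (a minimal blur of 0.8, four octaves, three scales per octave) are assumptions chosen only to show the geometric progression of the blur levels and the overlap between consecutive octaves.

```python
import math


def scale_space_sigmas(sigma_min=0.8, n_octaves=4, n_scales_per_octave=3):
    """Blur level of each scale-space slice: sigma_min * 2**(o + s / n_spo).

    Three extra slices per octave are included, as is customary in SIFT-like
    pipelines, so that difference-of-Gaussians extrema can be localized on
    every scale of the octave.
    """
    sigmas = []
    for o in range(n_octaves):
        octave = [sigma_min * 2 ** (o + s / n_scales_per_octave)
                  for s in range(n_scales_per_octave + 3)]
        sigmas.append(octave)
    return sigmas
```

Note that slice `n_scales_per_octave` of one octave has exactly the blur of slice 0 of the next octave, which is what makes per-octave subsampling consistent.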
212

Recalage 3D/2D d'images pour le traitement endovasculaire des dissections aortiques. / 3D/2D Image registration for endovascular treatment of aortic dissections

Lubniewski, Pawel 10 December 2014 (has links)
In this study, we present our work on 3D/2D image registration for aortic dissection. Its aim is to propose a visualization of medical data that can be used by physicians during endovascular procedures. For this purpose, we have proposed a parametric model of the aorta, called the tubular envelope. It expresses the global shape and the deformations of the aorta with a minimal number of parameters, and it is used by the registration algorithms proposed in this study. Registration by ITD (Image Transformation Descriptors) is our original method of image alignment: it computes the rigid 2D transformation between data sets directly, without any optimization process. We provide the definition of this method, together with several descriptor formulae for images of the aorta; the technique quickly provides a coarse alignment of the data. We also propose an extension of the original approach to the registration of 3D and 2D images. The complete 3D/2D registration chain presented in this document consists of the ITD stage followed by accurate intensity-based and hybrid methods. Using our descriptor-based algorithm as an initialization phase reduces the computing time and improves the efficiency of the registration compared with classical approaches. We have tested our registration methods on medical images of several patients treated by endovascular procedures. The results were reviewed by clinical specialists and judged satisfactory; our registration chain may therefore appear in intervention rooms in the future.
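The optimization-free spirit of such direct registration can be illustrated with a closed-form rigid 2D alignment computed from image moments. This sketch is only an analogy: the moment-based descriptors below are illustrative and are not the ITD formulae of the thesis.

```python
import math


def rigid_align_2d(src, dst):
    """Closed-form rigid 2D alignment (rotation + translation) of two point
    sets, computed from first- and second-order moments -- no iterative
    optimization. The choice of moments here is illustrative only."""
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def orientation(pts, c):
        # Principal-axis angle from central second-order moments.
        mu11 = sum((x - c[0]) * (y - c[1]) for x, y in pts)
        mu20 = sum((x - c[0]) ** 2 for x, y in pts)
        mu02 = sum((y - c[1]) ** 2 for x, y in pts)
        return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

    cs, cd = centroid(src), centroid(dst)
    theta = orientation(dst, cd) - orientation(src, cs)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cd[0] - (cos_t * cs[0] - sin_t * cs[1])
    ty = cd[1] - (sin_t * cs[0] + cos_t * cs[1])
    return theta, (tx, ty)
```

Because everything is computed in closed form, a coarse alignment of this kind is cheap enough to serve as the initialization of a slower, accurate intensity-based stage, which is the role ITD plays in the chain described above.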
213

An Isometry-Invariant Spectral Approach for Macro-Molecular Docking

De Youngster, Dela January 2013 (has links)
Proteins and the formation of large protein complexes are essential parts of living organisms. Proteins are present in all aspects of life processes, performing a multitude of functions ranging from being structural components of cells to facilitating the passage of certain molecules between various regions of cells. The 'protein docking problem' refers to the computational task of predicting the appropriate matching pair of a protein (receptor) with respect to another protein (ligand), when the two attempt to bind to one another to form a stable complex. Research shows that matching the three-dimensional (3D) geometric structures of candidate proteins plays a key role in determining a so-called docking pair, which is one of the key aspects of the Computer Aided Drug Design process. However, the active sites responsible for binding do not always present a rigid-body shape matching problem. Rather, they may undergo considerable deformation when docking occurs, which complicates the problem of finding a match. To address this issue, we present an isometry-invariant and topologically robust partial shape matching method for finding complementary protein binding sites, which we call the ProtoDock algorithm. The ProtoDock algorithm comes in two variations. The first version performs partial shape complementarity matching by initially segmenting the underlying protein object mesh into smaller portions using a spectral mesh segmentation approach. The Heat Kernel Signature (HKS), the underlying basis of our shape descriptor, is subsequently computed for the obtained segments. A final descriptor vector is constructed from the Heat Kernel Signatures and used as the basis for the segment matching. The three descriptor methods employed are the established Bag of Features (BoF) technique and our two novel approaches, Closest Medoid Set (CMS) and Medoid Set Average (MSA).
The second variation of our ProtoDock algorithm performs the partial matching using the pointwise HKS descriptors. The use of the pointwise HKS is motivated by the observation that, at adequate time scales, the Heat Kernel Signature of a point on a surface sufficiently describes its neighbourhood; hence the HKS of a point may serve as the representative descriptor of the region of which it forms a part. We propose three sampling methods---Uniform, Random, and Segment-based Random sampling---for selecting these points for the partial matching. Random and Segment-based Random sampling both prove superior to the Uniform sampling method. Our experimental results, run against the Protein-Protein Benchmark 4.0, demonstrate the viability of our approach: it successfully returns known binding segments for known pairing proteins. Furthermore, our ProtoDock-1 algorithm still yields good results for low-resolution protein meshes. This results in even faster processing and matching times, with substantially reduced computational requirements when obtaining the HKS.
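The pointwise Heat Kernel Signature used above has a simple closed form once a truncated eigendecomposition of the mesh Laplacian is available: HKS(x, t) is the sum over eigenpairs of exp(-lambda_i * t) * phi_i(x)^2. The sketch below assumes the eigenpairs are precomputed (by any Laplacian solver); it only illustrates the signature formula, not the thesis's full pipeline.

```python
import math


def heat_kernel_signature(eigenvalues, eigenvectors, vertex, times):
    """Pointwise HKS at a mesh vertex for a list of diffusion times.

    eigenvalues: truncated Laplacian spectrum [lambda_0, lambda_1, ...]
    eigenvectors: matching eigenfunctions, each a list of per-vertex values
    Returns [sum_i exp(-lambda_i * t) * phi_i(vertex)**2 for each t].
    """
    return [sum(math.exp(-lam * t) * phi[vertex] ** 2
                for lam, phi in zip(eigenvalues, eigenvectors))
            for t in times]
```

Small diffusion times weight all eigenpairs and capture local geometry; large times are dominated by the low-frequency eigenfunctions and capture the global shape, which is what makes a multi-time HKS vector a useful region descriptor.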
214

Combinação de descritores locais e globais para recuperação de imagens e vídeos por conteúdo / Combining local and global descriptors for content-based image and video retrieval

Andrade, Felipe dos Santos Pinto de, 1986- 22 August 2018 (has links)
Orientador: Ricardo da Silva Torres, Hélio Pedrini / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Recently, the fusion of descriptors has become a trend for improving performance in image and video retrieval tasks. Descriptors can be global or local, depending on how they analyze visual content. Most existing work has focused on the fusion of a single type of descriptor. In contrast, this work analyzes the impact of combining global and local descriptors. We perform a comparative study of different types of descriptors and all of their possible combinations. Furthermore, we investigate different models for extracting and comparing local and global features of images and videos, and evaluate the use of genetic programming as a suitable alternative for combining local and global descriptors. Extensive experiments following a rigorous experimental design show that global and local descriptors complement each other: when combined, they outperform both other combinations and the individual descriptors / Mestrado / Ciência da Computação / Mestre em Ciência da Computação
215

Os desdobramentos teóricos da proporcionalidade na escola de educação básica / The theoretical developments of proportionality in the basic education school

Santos, Mayra Taís Albuquerque 20 July 2018 (has links)
This work studies proportionality from a broader perspective, closer to the spirit in which classical mathematics was conceived. We aim to widen the understanding of the content and present it in a broader light, countering the simplistic treatment found in bibliographies for Basic Education, in order to justify the proposed Didactic Sequence on Similarity and Volume Measures for the 9th grade of Elementary School. The dissertation is divided into four chapters. The first presents a historical construction of proportionality, covering commensurability, Euclid's Elements and the Theorem of Thales. Chapter 2 focuses on Cavalieri's Principle and Pappus's Theorem, which are closely related to the following chapter on the proportionality of solids, in particular polyhedra and solids of revolution, broadening the perspective with which this content is treated in mathematics teaching. Finally, chapter 4 presents the proposed Didactic Sequence on Similarity and Volume Measures, directed at the 9th year of Elementary School, with 7 activities and an application time of 14 class hours, together with some suggestions of contents in which proportionality already is, or can be, strongly used. The purpose of the proposal is to use preexisting mental schemas to construct new ones, fostering the appreciation of meaningful learning by the teacher who teaches mathematics and meaning for the student within mathematics itself, in order to stimulate the discovery both of new content and of its importance in all aspects of social life. / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / IMPA - Instituto de Matemática Pura e Aplicada
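As a concrete illustration of the role Pappus's Theorem plays in chapter 2, its second (volume) form can be stated and applied to a solid of revolution in one line each; this is a standard worked example, not taken from the dissertation itself:

```latex
% Pappus's second centroid theorem: a plane region of area $A$ revolved
% about an external axis, with centroid at distance $R$ from the axis,
% sweeps out the volume
V = 2\pi R \, A .
% Worked example: a disk of radius $r$ revolved at distance $R \ge r$
% (a torus) gives
V = 2\pi R \cdot \pi r^{2} = 2\pi^{2} R r^{2} ,
% which behaves as proportionality predicts under similarity: scaling
% every length by $k$ multiplies the volume by $k^{3}$.
```

The closing remark is exactly the link to proportionality that motivates treating Pappus's Theorem alongside Cavalieri's Principle before discussing similar solids.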
216

Analýza časového vývoje léčených nádorů páteře v CT datech / Time development analysis of treated lesion in spinal CT data

Nohel, Michal January 2021 (has links)
This diploma thesis focuses on the time-development analysis of treated lesions in CT data. The theoretical part deals with the anatomy, physiology and pathophysiology of the spine and vertebral bodies. It further describes diagnostic and therapeutic options for the detection and treatment of spinal lesions, and contains an overview of the current state of time-development analysis in oncology. The problems of the available databases are discussed, and new databases are created for the subsequent analysis. Furthermore, a methodology for time-development analysis based on shape characterization and the extent of vertebral involvement is proposed. The proposed approaches to feature extraction are applied to the created databases. Their choice and suitability are discussed, including their potential for use in clinical practice for monitoring lesion development and deriving characteristic dependences of the features on the patient's prognosis.
217

Computational Methods for Protein Structure Comparison and Analysis

Xusi Han (8797445) 05 May 2020 (has links)
Proteins are involved in almost all functions in a living cell, and functions of proteins are realized by their tertiary structures. Protein three-dimensional structures can be solved by multiple experimental methods, but computational approaches serve as an important complement to experimental methods for comparing and analyzing protein structures. Protein structure comparison allows the transfer of knowledge about known proteins to a novel protein and plays an important role in function prediction. Obtaining a global perspective of the variety and distribution of protein structures also lays a foundation for our understanding of the building principle of protein structures. This dissertation introduces our computational method to compare protein 3D structures and presents a novel mapping of protein shapes that represents the variety and the similarities of 3D shapes of proteins and their assemblies. The methods developed in this work can be applied to obtain new biological insights into protein atomic structures and electron density maps.
218

Aplikace solvatačního modelu k popisu retence vybraných látek v kapalinové a plynové chromatografii / Application of the solvation model to the retention description of selected compounds in liquid and gas chromatography

Jirkal, Štěpán January 2016 (has links)
(EN) The solvation model based on LSER was applied to study the retention behaviour of analytes in liquid and gas chromatography. In the first chapter, the retention of 21 solutes was described using the solvation model over a wide range of methanol-water and acetonitrile-water mobile phase compositions. In general, the retention of aromatic compounds was better described by the solvation model than that of aliphatic compounds. The effect of the particular analytes used to formulate the LSER model on the quality of the retention description was studied: different retention estimates were obtained when the regression set of compounds included aromatic solutes only or, by contrast, aliphatic solutes only, and a solvation model developed on the basis of oxygen derivatives gave distinctly different results from a model formulated with nitrogen derivatives only. The second chapter of this work, focused on gas chromatography, dealt with the description of the retention of 152 C5-C8 alkene isomers by the LSER model. The solvation descriptor L was obtained using two estimation methods, Havelec-Ševčík (HS) and Platts-Butina (PB), and the descriptor E was calculated according to its definition. Two models for the retention description of the alkenes were constructed, the HS model and the PB model, derived...
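For context, the Abraham-type LSER equation underlying this kind of study, in the gas-chromatography form that uses the descriptors E and L mentioned above, reads:

```latex
% Abraham solvation (LSER) model, GC form:
\log k = c + eE + sS + aA + bB + lL
% E: excess molar refraction; S: dipolarity/polarizability;
% A, B: hydrogen-bond acidity and basicity; L: logarithm of the
% gas--hexadecane partition coefficient. The lowercase letters are the
% regression coefficients characterizing the stationary phase, fitted
% from the retention of a training set of solutes.
```

In the liquid-chromatography case of the first chapter, the L term is conventionally replaced by a McGowan-volume term vV; the quality of the fitted coefficients is precisely what depends on the composition of the regression set, as the abstract reports.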
219

Image Recognition Techniques for Optical Head Mounted Displays

Kondreddy, Mahendra 30 January 2017 (has links)
The evolution of technology has led research into new wearable devices such as smart glasses, which provide new visualization techniques. Augmented Reality, a combination of virtual and actual reality, is an advanced technology that can significantly ease the execution of complex operations, giving the user new tools for the transfer of knowledge in several environments and processes. This thesis explores the development of an Android-based image recognition application. Feature point detectors and descriptors are used because they cope well with correspondence problems. The best image recognition technique for the smart glasses is selected based on the time taken to retrieve results and the amount of power consumed in the process. As smart glasses are equipped with limited resources, the selected approach should require little computation so that device operation remains uninterrupted. An effective and efficient method for detecting and recognizing safety signs in images is selected. The ubiquitous SIFT and SURF feature detectors consume more time, are computationally complex, and require high-end hardware for processing. Binary descriptors are considered instead, as they are lightweight and support low-power devices much more effectively. A comparative analysis of binary descriptors such as BRIEF, ORB, AKAZE and FREAK is carried out on the smart glasses, based on their performance and the application requirements. ORB proves the most efficient of the binary descriptors and the most effective for the smart glasses in terms of time measurements and low power consumption.
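The reason binary descriptors such as BRIEF and ORB suit low-power devices is visible in their matching cost: comparing two descriptors is one XOR plus a popcount, rather than a floating-point distance. The sketch below shows that core operation with descriptors packed as integers; it is a minimal illustration, not the thesis's application code.

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors packed as ints:
    a single XOR followed by a popcount."""
    return bin(d1 ^ d2).count("1")


def brute_force_match(query, train, max_dist=64):
    """For each query descriptor, return the index of the nearest train
    descriptor under Hamming distance, or None if no candidate is within
    max_dist bits (a typical rejection threshold for 256-bit descriptors)."""
    matches = []
    for q in query:
        best_idx, best_dist = None, max_dist + 1
        for i, t in enumerate(train):
            d = hamming(q, t)
            if d < best_dist:
                best_idx, best_dist = i, d
        matches.append(best_idx)
    return matches
```

Real-valued descriptors like SIFT and SURF require 128- or 64-dimensional Euclidean distances per comparison, which is exactly the computational load the thesis avoids on the smart glasses.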
220

Identifying Modelling Tasks

Meier, Stefanie 07 May 2012 (has links)
The Comenius Network Project “Developing Quality in Mathematics Education II” funded by the European Commission consists of partners from schools, universities and teacher training centres from eleven European countries. One advantage of the project is the mutual exchange between teachers, teacher trainers and researchers in developing learning material. To support the teachers most effectively the researchers asked the teachers what they wanted the researchers to do. The answer was also a question: How can we identify (good) modelling tasks? A discussion ensued in the research group of this project which resulted in a list of descriptors characterising modelling tasks. This paper focuses on the theoretical background of mathematical modelling and will thereby substantiate the list of descriptors for modelling tasks.
