131

Une approche globale pour la métrologie 3D automatique multi-systèmes / A global approach for automatic 3D part inspection

Audfray, Nicolas 17 December 2012 (has links)
La métrologie 3D permet la vérification de spécifications géométriques et dimensionnelles sur des pièces mécaniques. Ce contrôle est classiquement réalisé à partir de mesures avec des capteurs à contact montés sur des machines à mesurer tridimensionnelles. Ce type de mesures offre une très grande qualité de données acquises mais requiert un temps d'exécution relativement long. Les présents travaux s'attachent donc à développer les mesures optiques dans le cadre de la métrologie 3D qui, avec une qualité diminuée, permettent une exécution beaucoup plus rapide. L'absence de norme concernant ces systèmes de mesure a pour conséquence leur utilisation rare dans le cadre de la métrologie. En effet, le choix d'un système est généralement réalisé à partir de spécifications sur sa qualité. Nous proposons donc une méthode de qualification des systèmes de mesure optiques permettant de quantifier la qualité des données qu'ils fournissent. Les données ainsi qualifiées sont stockées dans une base de données. Un processus global d'inspection 3D multi-systèmes est mis en place, permettant le choix du système de numérisation le mieux adapté (à contact ou sans contact) en termes de qualité et de coût de numérisation, à partir des données qualifiées de la base de données. Lors de l'utilisation de systèmes de mesure optiques, la baisse de qualité est essentiellement due au bruit de numérisation inhérent à ce type de systèmes. Un filtre permettant d'éliminer ce bruit, tout en gardant le défaut de forme de la surface, est mis en place dans le processus afin de rendre possible la vérification de spécifications avec des intervalles de tolérance faibles à l'aide de systèmes de mesure optiques. / 3D metrology allows GD&T verification on mechanical parts. This verification is usually performed using data obtained with a touch probe mounted on a coordinate measuring machine. Such a measurement offers high data quality but requires a long processing time.
The proposed research aims at expanding optical measurements in 3D metrology, reducing execution time but with a lower data quality. The lack of standards in this field makes the use of optical sensors uncommon in 3D metrology. Indeed, the selection of a system is mostly carried out from its quality specifications. We therefore propose a protocol to assess the quality of optical measuring systems that allows, in particular, quantification of acquired data quality. The results of measuring system qualification are stored in a database. Taking advantage of this database, a global multi-system 3D inspection process is set up, allowing the selection of the best digitizing system (contact or contactless) in terms of quality and digitizing cost. When using optical sensors, the lower quality is mostly due to the digitizing noise inherent in this kind of system. A filter that removes this noise while keeping the form deviation of the surface is proposed in the process, making it possible to verify specifications with small tolerance intervals using optical systems.
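The abstract states that the filter must remove digitizing noise while preserving the surface's form deviation, but does not give its design. The sketch below only illustrates that separation principle on a 1D profile, using a simple moving average; the function names and window size are assumptions, not the thesis's method.

```python
# Illustrative only: split a measured 1D profile into a smooth component
# (approximating the form deviation) and a residual (the digitizing noise).
def moving_average(profile, window=5):
    """Smooth a profile with a centered moving average (window assumed odd)."""
    half = window // 2
    smoothed = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        smoothed.append(sum(profile[lo:hi]) / (hi - lo))
    return smoothed

def split_form_and_noise(profile, window=5):
    """Return (form, noise) such that profile[i] == form[i] + noise[i]."""
    form = moving_average(profile, window)
    noise = [p - f for p, f in zip(profile, form)]
    return form, noise
```

On a real scan the window (or an equivalent cut-off frequency) would have to be chosen so that the retained component still contains the form deviation to be checked against the tolerance.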
132

Contributions to the 3D city modeling : 3D polyhedral building model reconstruction from aerial images and 3D facade modeling from terrestrial 3D point cloud and images / Contributions à la modélisation 3D des villes : reconstruction 3D de modèles de bâtiments polyédriques à partir d'images aériennes et modélisation 3D de façades à partir de nuage de points 3D et d'images terrestres

Hammoudi, Karim 15 December 2011 (has links)
L’objectif principal de ce travail est le développement de recherches en modélisation 3D du bâti. En particulier, la recherche en reconstruction 3D de bâtiments est un sujet très développé depuis les années 90. Malgré tout, il paraît nécessaire de poursuivre les recherches dans cet axe étant donné que les approches actuelles consacrées à la reconstruction 3D de bâtiments (bien qu’efficaces) rencontrent encore des difficultés en termes de généralisation, de cohérence et de précision. Par ailleurs, les récents développements des systèmes d’acquisition de rues tels que les systèmes de cartographie mobile ouvrent de nouvelles perspectives d’amélioration de la modélisation des bâtiments, dans le sens où les données terrestres (très précises et résolues) peuvent être exploitées avec davantage de cohérence (en comparaison à l’aérien) pour enrichir les modèles de bâtiments au niveau des façades (géométrie, texture). Ainsi, des approches de modélisation aériennes et terrestres sont individuellement proposées. Au niveau aérien, nous décrivons une approche directe, dépourvue d’extraction et d’assemblage de primitives géométriques, en vue de la reconstruction 3D de modèles polyédriques simples de bâtiments à partir d’un jeu d’images aériennes calibrées. Au niveau terrestre, plusieurs approches qui décrivent essentiellement un pipeline pour la modélisation 3D des façades urbaines sont proposées, à savoir : la segmentation et la classification de nuages de points de rues urbaines, la modélisation géométrique des façades urbaines et le texturage des façades urbaines comportant des occultations causées par d’autres objets du mobilier urbain. / The aim of this work is to develop research on 3D building modeling. In particular, aerial-based 3D building reconstruction has been a very active research topic since the 1990s.
However, it is necessary to pursue this research, since current approaches to massive 3D building reconstruction (although efficient) still encounter problems of generalization, coherency and accuracy. Besides, the recent development of street acquisition systems such as Mobile Mapping Systems opens new perspectives for improvements in building modeling, in the sense that terrestrial data (very dense and accurate) can be exploited more effectively (in comparison to aerial data) to enrich building models at the facade level (e.g., geometry, texturing). Hence, aerial-based and terrestrial-based building modeling approaches are individually proposed. At the aerial level, we describe a direct and featureless approach for simple polyhedral building reconstruction from a set of calibrated aerial images. At the terrestrial level, several approaches that essentially describe a 3D urban facade modeling pipeline are proposed, namely, street point cloud segmentation and classification, geometric modeling of urban facades, and occlusion-free facade texturing.
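The terrestrial pipeline above starts with street point-cloud segmentation; the abstract does not name the algorithm, so the following sketch uses RANSAC plane fitting, a common (assumed) way to isolate a dominant plane such as a facade from a noisy cloud. It is an illustrative choice, not the author's method.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points: n . x + d = 0."""
    u = tuple(p2[i] - p1[i] for i in range(3))
    v = tuple(p3[i] - p1[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0.0:  # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=300, tol=0.05, rng=random):
    """Return the plane with the most inliers, plus those inliers."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

In a real pipeline this would be iterated: remove the inliers of the detected plane, then rerun to extract the next facade or ground plane.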
133

Modélisation et score de complexes protéine-ARN / Modelling and scoring of protein-RNA complexes

Guilhot-Gaudeffroy, Adrien 29 September 2014 (has links)
Cette thèse présente des résultats dans le domaine de la prédiction d’interactions protéine-ARN. C’est un domaine de recherche très actif, pour lequel la communauté internationale organise régulièrement des compétitions pour évaluer différentes techniques de prédiction in silico d’interactions protéine-protéine et protéine-ARN sur des données benchmarks (CAPRI, Critical Assessment of PRedicted Interactions), par prédiction en aveugle et en temps limité. Dans ce cadre, de nombreuses approches reposant sur des techniques d’apprentissage supervisé ont récemment obtenu de très bons résultats. Nos travaux s’inscrivent dans cette démarche. Nous avons travaillé sur des jeux de données de 120 complexes protéine-ARN extraits de la PRIDB non redondante (Protein-RNA Interface DataBase, banque de données de référence pour les interactions protéine-ARN). La méthodologie de prédiction d’interactions protéine-ARN a aussi été testée sur 40 complexes issus de benchmarks de l’état de l’art et indépendants des complexes de la PRIDB non redondante. Le faible nombre de structures natives et la difficulté de générer in silico des structures identiques à la solution in vivo nous ont conduits à mettre en place une stratégie de génération de candidats par perturbation de l’ARN partenaire d’un complexe protéine-ARN natif. Les candidats ainsi obtenus sont considérés comme des conformations presque natives s’ils sont suffisamment proches du natif. Les autres candidats sont des leurres. L’objectif est de pouvoir identifier les presque natifs parmi l’ensemble des candidats potentiels, par apprentissage supervisé d’une fonction de score. Nous avons conçu pour l’évaluation des fonctions de score une méthodologie de validation croisée originale appelée le leave-"one-pdb"-out, où il existe autant de strates que de complexes protéine-ARN et où chaque strate est constituée des candidats générés à partir d’un complexe.
L’une des approches présentant les meilleures performances à CAPRI est l’approche RosettaDock, optimisée pour la prédiction d’interactions protéine-protéine. Nous avons étendu la fonction de score native de RosettaDock pour résoudre la problématique protéine-ARN. Pour l’apprentissage de cette fonction de score, nous avons adapté l’algorithme évolutionnaire ROGER (ROC-based Genetic LearnER) à l’apprentissage d’une fonction logistique. Le gain obtenu par rapport à la fonction native est significatif. Nous avons aussi mis au point d’autres modèles basés sur des approches de classifieurs et de métaclassifieurs, qui montrent que des améliorations sont encore possibles. Dans un second temps, nous avons introduit et mis en œuvre une nouvelle stratégie pour l’évaluation des candidats qui repose sur la notion de prédiction multi-échelle. Un candidat est représenté à la fois au niveau atomique, c’est-à-dire le niveau de représentation le plus détaillé, et au niveau dit “gros-grain”, où nous utilisons une représentation géométrique basée sur des diagrammes de Voronoï pour regrouper ensemble plusieurs composants de la protéine ou de l’ARN. L’état de l’art montre que les diagrammes de Voronoï ont déjà permis d’obtenir de bons résultats pour la prédiction d’interactions protéine-protéine. Nous en évaluons donc les performances après avoir adapté le modèle à la prédiction d’interactions protéine-ARN. L’objectif est de pouvoir rapidement identifier la zone d’interaction (épitope) entre la protéine et l’ARN avant d’utiliser l’approche atomique, plus précise, mais plus coûteuse en temps de calcul. L’une des difficultés est alors de pouvoir générer des candidats suffisamment diversifiés. Les résultats obtenus sont prometteurs et ouvrent des perspectives intéressantes. Une réduction du nombre de paramètres impliqués, de même qu’une adaptation du modèle de solvant explicite, pourraient en améliorer les résultats.
/ My thesis shows results for the prediction of protein-RNA interactions with machine learning. An international community named CAPRI (Critical Assessment of PRedicted Interactions) regularly assesses in silico methods for the prediction of interactions between macromolecules. Using blind predictions within time constraints, protein-protein and, more recently, protein-RNA interaction prediction techniques are assessed. In a first stage, we worked on curated protein-RNA benchmarks, including 120 3D structures extracted from the non-redundant PRIDB (Protein-RNA Interface DataBase). We also tested the protein-RNA prediction method we designed using 40 protein-RNA complexes that were extracted from state-of-the-art benchmarks and independent from the non-redundant PRIDB complexes. Generating candidates identical to the in vivo solution with only a few 3D structures is an issue we tackled by modelling a candidate generation strategy using RNA structure perturbation in the protein-RNA complex. Such candidates are either near-native candidates – if they are close enough to the solution – or decoys – if they are too far away. We want to discriminate the near-native candidates from the decoys. For the evaluation, we performed an original cross-validation process we called leave-"one-pdb"-out, where there is one fold per protein-RNA complex and each fold contains the candidates generated using one complex. One of the gold standard approaches participating in the CAPRI experiment to date is RosettaDock. RosettaDock is originally optimized for protein-protein complexes. For the learning step of our scoring function, we adapted and used an evolutionary algorithm called ROGER (ROC-based Genetic LearnER) to learn a logistic function. The results show that our scoring function performs much better than the original RosettaDock scoring function. Thus, we extend RosettaDock to the prediction of protein-RNA interactions.
We also evaluated classifier-based and metaclassifier-based approaches, which can lead to new improvements with further investigation. In a second stage, we introduced a new way to evaluate candidates using a multi-scale protocol. A candidate is geometrically represented on an atomic level – the most detailed scale – as well as on a coarse-grained level. The coarse-grained level is based on the construction of a Voronoi diagram over the coarse-grained atoms of the 3D structure. Voronoi diagrams already successfully modelled coarse-grained interactions for protein-protein complexes in the past. The idea behind the multi-scale protocol is to first find the interaction patch (epitope) between the protein and the RNA before using the time-consuming and yet more precise atomic level. We modelled new scoring terms, as well as new scoring functions, to evaluate generated candidates. Results are promising. Reducing the number of parameters involved and optimizing the explicit solvent model may improve the coarse-grained level predictions.
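The leave-"one-pdb"-out protocol described above is simple to state in code: one fold per protein-RNA complex, each fold holding all candidates generated from that complex. The (pdb_id, features, is_near_native) tuple layout below is an illustrative assumption.

```python
# Sketch of leave-"one-pdb"-out cross-validation: every candidate generated
# from the held-out complex goes to the test set, all others to training.
def leave_one_pdb_out(candidates):
    """candidates: list of (pdb_id, features, is_near_native) tuples.
    Yields (held_out_id, train, test), one fold per distinct complex."""
    for held_out in sorted({pdb_id for pdb_id, _, _ in candidates}):
        train = [c for c in candidates if c[0] != held_out]
        test = [c for c in candidates if c[0] == held_out]
        yield held_out, train, test
```

A scoring function trained on each train split and evaluated on the held-out fold thus never sees any candidate derived from the test complex during training, which is the point of the protocol.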
134

Desenvolvimento e avaliação de tutoriais de moléculas como ferramenta de ensino: o caso dos tutoriais "Estrutura e Estabilidade do DNA" e "Estabilidade do DNA" / Development and evaluation of molecular tutorials as a teaching tool: the case of the "DNA Structure and Stability" and "DNA Stability" tutorials

Fonseca, Larissa Assis Barony Valadares 06 May 2019 (has links)
Os tutoriais Estrutura e Estabilidade do DNA e Estabilidade do DNA são produtos da metodologia proposta neste trabalho para o desenvolvimento de tutoriais de moléculas com animações interativas 3D, a qual estabelece as cinco diretrizes: i. Selecionar conteúdo de Bioquímica, Biologia molecular e Química; ii. Definir a distribuição do conteúdo e a estrutura do tutorial; iii. Integrar textos, diagramas, tabelas e animações moleculares 3D; iv. Desenvolver as animações integrando diferentes modos de representação 3D; e v. Exibir simultaneamente na página web animação molecular 3D associada com texto, diagrama ou tabela. A metodologia é fundamentada em Princípios de Aprendizagem por Multimídia de Mayer, no intuito de favorecer a aprendizagem da molécula, o que exige mais do que a mera exposição da estrutura molecular em três dimensões, sendo necessária a articulação da estrutura com mídias acessórias (3ª diretriz) para conduzir a exploração guiada e paulatina da molécula. No caso de biopolímeros, como o DNA, o aumento da complexidade estrutural torna mais necessário guiar o aprendiz pela exploração da molécula, sendo preciso explicar as interações químicas entre os monômeros e entre os monômeros com o meio aquoso para que o aprendiz compreenda a estrutura e estabeleça relações desta com as atividades biológicas do DNA. Assim, um dos princípios da metodologia é o estabelecimento de unidades conceituais que exploram gradativamente o biopolímero por meio de um texto constituído por pergunta e resposta e, em seguida, uma animação acionada pelo botão Visualize 3D. A estratégia de apresentar o conteúdo de forma segmentada objetiva favorecer o estabelecimento de conexões entre o conhecimento novo, apresentado na unidade conceitual, e os conceitos preexistentes do aprendiz. O texto de cada unidade possui uma organização hierárquica: i. Resposta resumida para a pergunta; ii. detalhamento da resposta; iii. estabelecimento de relação estrutura/atividade biológica.
Além disso, existem hiperlinks para explicação adicional sobre conceitos químicos e bioquímicos, tais como ligação de hidrogênio, fendas do DNA, etc. Também foram adotados princípios educacionais para a criação das animações. Os recursos utilizados para construção dos tutoriais foram ferramentas do Laboratório Integrado de Química e Bioquímica e estruturas moleculares obtidas do RCSB Protein Data Bank. Para avaliação do potencial instrucional dos tutoriais foram realizadas atividades com alunos de pós-graduação e graduação que responderam a um questionário antes e outro após usar os tutoriais. Também foram conduzidas entrevistas com alunos de graduação no intuito de averiguar a percepção destes sobre a metodologia de desenvolvimento de tutoriais e a aprendizagem com o uso de tutoriais de moléculas. A análise dos questionários evidenciou uma melhoria no desempenho dos estudantes após usar os tutoriais, além de eles considerarem que aprenderam e que gostariam de usar mais tutoriais de moléculas como ferramenta de ensino e aprendizagem, o que é um indicativo do potencial da metodologia proposta neste trabalho. / The "DNA Structure and Stability" and "DNA Stability" tutorials are products of the methodology proposed in this work for the development of molecular tutorials with 3D interactive animations, which establishes five guidelines: i. Select contents of Biochemistry, Molecular Biology and Chemistry; ii. Define the content distribution and the structure of the tutorial; iii. Integrate texts, diagrams, tables and 3D molecular animations; iv. Develop the animations integrating different modes of 3D representation; and v. Simultaneously display on the web page a 3D molecular animation associated with text, diagram or table.
The methodology is based on Mayer's Multimedia Learning Principles, in order to favor the learning of the molecule, which requires more than the mere exposition of the molecular structure in three dimensions: the structure must be articulated with accessory media (3rd guideline) to conduct the guided and gradual exploration of the molecule. In the case of biopolymers, such as DNA, increasing structural complexity makes it more necessary to guide the learner through the exploration of the molecule. It is necessary to explain the chemical interactions between the monomers and between the monomers and the aqueous medium, so that the learner understands the structure and establishes relations between it and the biological activities of DNA. Thus, one of the principles of the methodology is the establishment of conceptual units that gradually explore the biopolymer through a text consisting of question and answer, followed by an animation triggered by the "Visualize 3D" button. The strategy of presenting content in a segmented way aims to favor the establishment of connections between the new knowledge presented in the conceptual unit and the preexisting knowledge in the cognitive structure of the learner. The text of each unit has a hierarchical organization: i. Brief answer to the question; ii. detailing of the answer; iii. establishment of the relation between structure and biological activity. In addition, there are hyperlinks for further explanation of chemical and biochemical concepts, such as hydrogen bonding, DNA grooves, etc. Educational principles were also adopted for the creation of the animations. The resources used to construct the tutorials were tools of the Integrated Laboratory of Chemistry and Biochemistry and molecular structures obtained from the RCSB Protein Data Bank.
In order to evaluate the instructional potential of the tutorials, activities were carried out with undergraduate and graduate students who answered a questionnaire before and after using the tutorials. We also conducted interviews with undergraduate students in order to ascertain their perception of the tutorial development methodology and of learning with molecular tutorials. The analysis of the questionnaires evidenced an improvement in the students' performance after using the tutorials; in addition, they considered that they had learned and would like to use more molecular tutorials as a teaching and learning tool, which is an indication of the potential of the methodology proposed in this work.
135

Analyse statistique de la diversité en anthropometrie tridimensionnelle / Statistical analysis of diversity in three-dimensional anthropometry

Kollia, Aikaterini 13 January 2016 (has links)
L’anthropométrie est le domaine scientifique qui étudie les dimensions du corps humain. La complexité de la morphologie du corps nécessite une analyse 3D, aujourd’hui permise par les progrès des scanners 3D. L’objectif de cette étude est de comparer les populations et d’utiliser les résultats pour mieux adapter les produits sportifs à la morphologie des utilisateurs. Des campagnes de mensuration 3D ont été réalisées et des algorithmes de traitement automatique ont été créés pour analyser les nuages de points des sujets scannés. Basés sur les méthodes d’images et de géométrie, ces algorithmes repèrent des points anatomiques, calculent des mesures 1D, alignent les sujets scannés et créent des modèles anthropométriques 3D représentatifs des populations. Pour analyser les caractéristiques anthropométriques, des statistiques de premier ordre et factorielles ont été adaptées pour être utilisées dans l’espace 3D. Les méthodes ont été appliquées à trois parties : le pied, la tête et la poitrine. Les différences morphologiques entre les populations, mais également au sein d’une population donnée, ont été révélées. Par exemple, la différence à chaque point de la tête entre des têtes chinoises et européennes a été calculée. Les statistiques en trois dimensions ont aussi permis de mettre en évidence l’asymétrie de la tête. La méthode de création de modèles anthropométriques est plus adaptée à nos applications que les méthodes de la littérature. L’analyse en trois dimensions permet d’obtenir des résultats qui ne sont pas visibles par les analyses 1D. Les connaissances acquises par cette étude sont utilisées pour la conception de différents produits vendus dans les magasins DECATHLON à travers le monde. / Anthropometry is the scientific field that studies human body dimensions (from the Greek άνθρωπος (human) + μέτρον (measure)). Anthropometrical analysis is currently based on 1D measurements (head circumference, length, etc.). However, the body’s morphological complexity requires 3D analysis.
This is possible due to recent progress in 3D scanners. The objective of this study is to compare populations' anthropometry and to use the results to adapt sporting goods to users' morphology. For this purpose, worldwide 3D measurement campaigns were carried out and automated processing algorithms were created in order to analyze the subjects' point clouds. Based on image processing methods and on shape geometry, these algorithms detect anatomical landmarks, calculate 1D measurements, align subjects and create representative anthropometrical 3D models. In order to analyze morphological characteristics, different statistical methods, including component analysis, were adapted for use in 3D space. The methods were applied to three body parts: the foot, the head and the bust. The morphological differences between and within populations were studied. For example, the difference at each point of the head between Chinese and European heads was calculated. The statistics in three dimensions also revealed the asymmetry of the head. The method used to create anthropometrical models is better suited to our applications than the methods in the literature. The analysis in three dimensions can give results that are not visible from 1D analyses. The knowledge gained in this thesis is used in the design of different products sold in DECATHLON stores around the world.
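The abstract describes adapting first-order statistics to 3D space: once scans are aligned and in vertex-to-vertex correspondence, population comparisons (such as the per-point difference between two mean head shapes) reduce to per-vertex statistics. A minimal sketch, with the data layout as an assumption:

```python
# Illustrative per-vertex statistics over aligned, corresponded 3D scans.
def mean_shape(shapes):
    """shapes: list of scans, each a list of (x, y, z) vertices in correspondence.
    Returns the per-vertex mean shape of the population."""
    n = len(shapes)
    return [tuple(sum(s[v][k] for s in shapes) / n for k in range(3))
            for v in range(len(shapes[0]))]

def pointwise_distance(shape_a, shape_b):
    """Per-vertex Euclidean distance between two aligned shapes."""
    return [sum((a[k] - b[k]) ** 2 for k in range(3)) ** 0.5
            for a, b in zip(shape_a, shape_b)]
```

The per-vertex distances between two population means are exactly the kind of map that reveals localized differences (or asymmetry, by comparing a shape to its mirrored self) that 1D measurements cannot show.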
136

Emprego de simulação computacional para avaliação de objetos simuladores impressos 3D para aplicação em dosimetria clínica / Use of computational simulation for evaluation of 3D printed phantoms for application in clinical dosimetry

Valeriano, Caio César Santos 29 May 2017 (has links)
O propósito de um objeto simulador é representar a alteração do campo de radiação provocada pela absorção e espalhamento em um dado tecido ou órgão de interesse. Suas características geométricas e de composição devem estar próximas o máximo possível aos valores associados ao seu análogo natural. Estruturas anatômicas podem ser transformadas em objetos virtuais 3D por técnicas de imageamento médico (p. ex. Tomografia Computadorizada) e impressas por prototipagem rápida utilizando materiais como, por exemplo, o ácido poliláctico. Sua produção para pacientes específicos requer o preenchimento de requisitos como a acurácia geométrica com a anatomia do indivíduo e a equivalência ao tecido, de modo que se possa realizar medidas utilizáveis, e ser insensível aos efeitos da radiação. O objetivo desse trabalho foi avaliar o comportamento de materiais impressos 3D quando expostos a feixes de fótons diversos, com ênfase para a qualidade de radioterapia (6 MV), visando a sua aplicação na dosimetria clínica. Para isso foram usados 30 dosímetros termoluminescentes de LiF:Mg,Ti. Foi analisada também a equivalência entre o PMMA e o PLA impresso para a resposta termoluminescente de 30 dosímetros de CaSO4:Dy. As irradiações com feixes de fótons com qualidade de radioterapia foram simuladas com o uso do sistema de planejamento Eclipse™, com o Anisotropic Analytical Algorithm e o Acuros® XB Advanced Dose Calculation algorithm. Além do uso do Eclipse™ e dos testes dosimétricos, foram realizadas simulações computacionais utilizando o código MCNP5. As simulações com o código MCNP5 foram realizadas para calcular o coeficiente de atenuação de placas impressas expostas a diversas qualidades de raios X de radiodiagnóstico e para desenvolver um modelo computacional de placas impressas 3D. / The purpose of a phantom is to represent the change in the radiation field caused by absorption and scattering in a given tissue or organ of interest.
Its geometrical characteristics and composition should be as close as possible to the values associated with its natural analogue. Anatomical structures can be transformed into 3D virtual objects by medical imaging techniques (e.g. Computed Tomography) and printed by rapid prototyping using materials such as, for example, polylactic acid. Its production for specific patients requires fulfilling requirements such as geometric accuracy with respect to the individual's anatomy and tissue equivalence, so that usable measurements can be made, and insensitivity to radiation effects. The objective of this work was to evaluate the behavior of 3D printed materials when exposed to different photon beams, with emphasis on the radiotherapy quality (6 MV), aiming at their application in clinical dosimetry. For this, 30 thermoluminescent dosimeters of LiF:Mg,Ti were used. The equivalence between PMMA and printed PLA for the thermoluminescent response of 30 CaSO4:Dy dosimeters was also analyzed. The irradiations with radiotherapy photon beams were simulated using the Eclipse™ treatment planning system, with the Anisotropic Analytical Algorithm and the Acuros® XB Advanced Dose Calculation algorithm. In addition to the use of Eclipse™ and the dosimetric tests, computational simulations were performed using the MCNP5 code. Simulations with the MCNP5 code were performed to calculate the attenuation coefficient of printed plates exposed to different diagnostic radiology X-ray qualities and to develop a computational model of 3D printed plates.
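The attenuation-coefficient calculation mentioned for the printed plates follows the Beer-Lambert law, I = I0 * exp(-mu * x). A minimal sketch of recovering mu from a transmission measurement through a slab (function names are illustrative):

```python
import math

def linear_attenuation_coefficient(i0, i, thickness_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * x), so mu = ln(I0 / I) / x.
    i0: incident intensity, i: transmitted intensity, thickness in cm."""
    if i0 <= 0 or i <= 0 or thickness_cm <= 0:
        raise ValueError("intensities and thickness must be positive")
    return math.log(i0 / i) / thickness_cm

def transmission(mu_per_cm, thickness_cm):
    """Fraction of the beam transmitted through a slab of given thickness."""
    return math.exp(-mu_per_cm * thickness_cm)
```

For a beam halved by a 1 cm slab, this gives mu = ln 2 per cm; comparing such measured coefficients for printed PLA against PMMA (or against MCNP5-simulated values) is the tissue-equivalence check the abstract describes.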
137

3D elektronová tomografie a korelativní mikroskopie / 3D electron tomography and correlative microscopy

BÍLÝ, Tomáš January 2019 (has links)
Electron tomography allows visualization of objects in the form of reconstructed 3D virtual volumes with the resolving power of electron microscopy. The thesis is focused primarily on biological applications of electron tomography applied to tilt-series images acquired in a transmission electron microscope at room temperature. Specifically, the interaction of tick-borne encephalitis virus with neural cells and the 3D ultrastructure of the central electron-dense part of the 9 + 1 flagellum (Caryophyllaeides fennica) were investigated. Finally, electron tomography was combined and correlated with atomic force microscopy to allow repeated examination of ultrathin sections on electron microscopy grids.
138

Vision et reconstruction 3D : application à la robotique mobile / Vision and 3D reconstruction : application to Mobile Robotics

Hmida, Rihab 16 December 2016 (has links)
Avec l’évolution des processus technologiques, l’intérêt pour la robotique mobile ne cesse d’augmenter depuis quelques années, notamment pour remplacer l’homme dans des environnements à risque (zones radioactives, robots militaires) ou des zones qui lui sont inaccessibles (exploration planétaire ou sous-marine), ou à des échelles différentes (robot à l’intérieur d’une canalisation, voire robot chirurgical à l’intérieur du corps humain). Dans ce même contexte, les systèmes de navigation destinés plus particulièrement à l’exploration sous-marine suscitent de plus en plus l’intérêt de nombreux géologues, roboticiens et scientifiques en vue de mieux connaître et caractériser les réseaux sous-marins. Pour des raisons de sécurité optimale, de nouvelles technologies (radar, sonar, systèmes de caméras…) ont été développées pour remplacer les plongeurs humains. C’est dans ce cadre que s’intègrent les travaux de cette thèse, ayant comme objectif la mise en œuvre d’un système de vision stéréoscopique permettant l’acquisition d’informations utiles et le développement d’un algorithme pour la restitution de la structure 3D d’un environnement confiné aquatique. Notre système est composé d’une paire de capteurs catadioptriques et d’une ceinture de pointeurs lasers permettant d’identifier des amers visuels de la scène, ainsi que d’une plateforme pour l’exécution du processus de traitement des images acquises. La chaîne de traitement est précédée par une phase d’étalonnage effectuée hors ligne pour la modélisation géométrique du système complet. L’algorithme de traitement consiste en une analyse pixellique des images stéréoscopiques pour l’extraction des projections laser 2D et la reconstruction de leurs correspondants 3D en se basant sur les paramètres d’étalonnage. La mise en œuvre du système complet sur une plateforme logicielle demande un temps d’exécution supérieur à celui exigé par l’application.
Les travaux clôturant ce mémoire s’adressent à cette problématique et proposent une solution permettant de simplifier le développement et l’implémentation d’applications temps réel sur des plateformes basées sur un dispositif FPGA. La mise en œuvre de notre application a été effectuée et une étude des performances est présentée, en considérant les besoins de l’application et ses exigences du point de vue de la précision, de la rapidité et du taux d’efficacité. / With the development of technological processes, interest in mobile robotics has been constantly increasing in recent years, particularly to replace humans in hazardous environments (radioactive areas, military robots) or in areas that are inaccessible to them (planetary or underwater exploration), or at different scales (a robot inside a pipeline or a surgical robot inside the human body). In the same context, navigation systems designed specifically for underwater exploration are attracting more and more interest from geologists, roboticists and scientists seeking to better understand and characterize underwater networks. For optimal safety, new technologies (radar, sonar, camera systems, etc.) have been developed to replace human divers. In this context, this thesis aims at implementing a stereoscopic vision system to acquire useful information and at developing an algorithm for the reconstruction of the 3D structure of a confined aquatic environment. Our system consists of a pair of catadioptric sensors and a belt of laser pointers used to identify visual landmarks in the scene, together with a platform for processing the acquired images. The processing chain is preceded by an offline calibration phase that produces the geometric model of the complete system.
The processing algorithm consists of pixel-wise analysis of the stereoscopic images for the extraction of 2D laser projections and rebuilds their 3D corresponding based on the calibration parameters.The implementation of the complete system on a software platform requests an execution time higher than that required by the application. The work closing the memory is addressed to this problem and proposes a solution to simplify the development and implementation of real-time applications on platforms based on a FPGA device. The implementation of our application was performed and a study of performance is presented, considering the requirements of the application in terms of precision, speed and efficiency rate.
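The 3D reconstruction step described above (matching 2D laser projections across the stereo pair and recovering 3D points from the calibration parameters) can be sketched in its simplest form. This is not the thesis's method — the thesis uses catadioptric sensors with a richer projection model — but a minimal linear (DLT) triangulation assuming two calibrated pinhole cameras:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a calibrated
    stereo pair. P1, P2 are 3x4 projection matrices; x1, x2 are the
    matched 2D pixel coordinates of the same scene point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy pinhole cameras: identity intrinsics, second camera shifted
# by one unit along x (a canonical rectified pair).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # → True
```

With noisy correspondences the SVD returns the least-squares solution; a real omnidirectional rig would first map pixels through the calibrated mirror model before triangulating.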
139

Emprego de simulação computacional para avaliação de objetos simuladores impressos 3D para aplicação em dosimetria clínica / Use of computational simulation for evaluation of 3D printed phantoms for application in clinical dosimetry

Caio César Santos Valeriano 29 May 2017 (has links)
The purpose of a phantom is to represent the change in the radiation field caused by absorption and scattering in a given tissue or organ of interest. Its geometric characteristics and composition should be as close as possible to those of its natural analogue. Anatomical structures can be converted into 3D virtual objects by medical-imaging techniques (e.g. computed tomography) and printed by rapid prototyping using materials such as polylactic acid (PLA). Producing patient-specific phantoms requires meeting requirements such as geometric fidelity to the individual's anatomy and tissue equivalence, so that usable measurements can be made, as well as insensitivity to radiation effects.

The objective of this work was to evaluate the behavior of 3D-printed materials exposed to various photon beams, with emphasis on a radiotherapy quality (6 MV), aiming at their application in clinical dosimetry. For this purpose, 30 LiF:Mg,Ti thermoluminescent dosimeters were used. The equivalence between PMMA and printed PLA was also analyzed through the thermoluminescent response of 30 CaSO4:Dy dosimeters. Irradiations with radiotherapy-quality photon beams were simulated with the Eclipse™ treatment planning system, using the Anisotropic Analytical Algorithm and the Acuros® XB advanced dose calculation algorithm. In addition to Eclipse™ and the dosimetric tests, computational simulations were performed with the MCNP5 code, both to calculate the attenuation coefficient of printed plates exposed to several diagnostic X-ray qualities and to develop a computational model of 3D-printed plates.
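The attenuation-coefficient calculation mentioned above rests on the Beer–Lambert law, I/I0 = exp(−μx). A minimal sketch of extracting μ from narrow-beam transmission data (with made-up numbers, not the thesis's MCNP5 results or a measured property of printed PLA) follows:

```python
import numpy as np

def attenuation_coefficient(thicknesses_cm, transmissions):
    """Estimate the linear attenuation coefficient mu (cm^-1) from
    narrow-beam transmission data via the Beer-Lambert law,
    I/I0 = exp(-mu * x), using a log-linear least-squares fit."""
    x = np.asarray(thicknesses_cm, dtype=float)
    y = np.log(np.asarray(transmissions, dtype=float))
    # Fit y = -mu * x (a line through the origin):
    # mu = -sum(x*y) / sum(x^2)
    return -np.dot(x, y) / np.dot(x, x)

# Synthetic transmissions for a material with mu = 0.57 cm^-1
# (illustrative value only).
mu_true = 0.57
x = np.array([0.5, 1.0, 2.0, 4.0])
I_rel = np.exp(-mu_true * x)
print(round(attenuation_coefficient(x, I_rel), 3))  # → 0.57
```

Fitting in log space over several thicknesses, rather than inverting a single measurement, averages out counting noise; Monte Carlo codes such as MCNP5 produce the transmissions themselves, from which μ is recovered the same way.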
140

Contributions to accurate and efficient cost aggregation for stereo matching

Chen, Dongming 12 March 2015 (has links)
3D-related applications such as 3D movies, 3D printing, 3D maps, and 3D object recognition are becoming increasingly common in daily life. Many of them require realistic 3D models, making 3D reconstruction a key enabling technique. This thesis focuses on a core problem of 3D reconstruction: stereo matching, which searches for correspondences between two or more images of a 3D scene. Despite the many stereo matching methods published over the past decades, the task remains challenging because of the accuracy and efficiency demanded by practical applications: autonomous driving requires real-time processing, while 3D object modeling requires high accuracy and resolution.

The well-known adaptive support-weight method, based on the bilateral filter, represents the state of the art among local methods, but it struggles to resolve the ambiguity induced by neighboring pixels that lie at different disparities yet have similar colors. Our first contribution is a trilateral-filter-based method that keeps the advantages of the bilateral filter while removing this ambiguity through a new boundary-strength term. Evaluated on the commonly accepted Middlebury benchmark, it ranked as the most accurate local stereo matching method at the time of submission (April 2013).

The computational complexity of the trilateral-filter-based method is high, however, and depends on the support window size. To improve its efficiency, we proposed a recursive trilateral filter, inspired by recursive filters: the raw costs are aggregated on a grid graph by four one-dimensional passes, giving O(N) complexity, independent of the support window size. In practice, processing a 375 × 450 image takes roughly 260 ms on a PC with a 3.4 GHz Intel Core i7 CPU, hundreds of times faster than the original method.

The boundary-strength term is computed from color edges, which comprise both depth edges and texture edges, yet only depth edges matter for depth estimation. The last contribution of this thesis is therefore a depth-edge detection method that distinguishes object contours from color changes within objects, together with a depth-edge trilateral-filter-based method. Evaluation on the Middlebury benchmark confirms its effectiveness: it is more accurate than the original trilateral-filter method and other local stereo matching methods.
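The cost-aggregation idea that the trilateral filter extends can be illustrated with a minimal sketch of bilateral-style adaptive support-weight aggregation (grayscale guide image, one disparity slice; parameter names and values are illustrative, not taken from the thesis):

```python
import numpy as np

def asw_aggregate(cost, guide, radius=3, sigma_c=10.0, sigma_s=3.0):
    """Bilateral-style adaptive support-weight aggregation of one raw
    cost slice (a single disparity hypothesis). `cost` and `guide` are
    2D arrays; each pixel's weight combines color similarity to the
    window center with spatial proximity."""
    h, w = cost.shape
    out = np.zeros_like(cost, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    cost_p = np.pad(cost, radius, mode='edge')
    guide_p = np.pad(guide, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win_c = cost_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_g = guide_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            color = np.exp(-(win_g - guide[y, x])**2 / (2 * sigma_c**2))
            wgt = spatial * color
            # Normalized weighted average of the raw matching costs.
            out[y, x] = np.sum(wgt * win_c) / np.sum(wgt)
    return out

# Sanity check: aggregating a constant cost slice leaves it unchanged,
# whatever the guide image looks like.
guide = np.arange(25.0).reshape(5, 5)
print(np.allclose(asw_aggregate(np.full((5, 5), 2.0), guide), 2.0))  # → True
```

Each pixel here costs O(r²) per disparity, which is exactly the dependence on support-window size that the four-pass recursive variant described above removes; the trilateral filter adds a third, boundary-strength term to these two weights.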
