261

Uma proposta para avaliação do desempenho de câmaras PET/SPECT / A Proposal for Evaluating the Performance of PET/SPECT Cameras

Suely Midori Aoki 11 December 2002
Positron emission tomography (PET) is a Nuclear Medicine imaging technique that allows the study of the function and metabolism of the human body in many clinical problems, using pharmaceuticals labeled with positron-emitting radionuclides. The most frequent applications are in oncology, neurology and cardiology, through qualitative and quantitative analysis of the resulting images. Currently, PET is performed in two ways: with dedicated systems, consisting of rings of a few thousand detectors operating in coincidence; or with PET/SPECT cameras, formed by two scintillation detectors in coincidence, which are also used for studies with single-photon-emitting radionuclides (Single Photon Emission Computed Tomography, SPECT). The development of PET/SPECT systems made studies with fluorodeoxyglucose, [18F]-FDG, a pharmaceutical labeled with 18F (a positron emitter with a physical half-life of 109 minutes), feasible for a large number of clinics and hospitals, mainly because this technology is more affordable than dedicated PET. In the present work, a methodology was developed to characterize and evaluate a PET/SPECT system with two scintillation detectors and a device with two 137Cs point sources, used to acquire the transmission images for photon attenuation correction. It is based on adaptations of the conventional tests for SPECT cameras, described in IAEA TECDOC-602 (1991, International Atomic Energy Agency, IAEA), and of those for dedicated PET systems, published in NEMA NU 2-1994 (National Electrical Manufacturers Association, NEMA). The results were organized as a set of testing protocols, which were applied to an ADAC Laboratories/Philips camera, the Vertex-Plus EPIC/MCD-AC, installed in the Radioisotope Service of InCor - HCFMUSP (Instituto do Coração - Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo). This camera was the first one installed in Brazil and is used predominantly for oncological and myocardial viability studies. The radiopharmaceutical used for imaging was [18F]-FDG, supplied regularly by IPEN/CNEN-SP (Instituto de Pesquisas Energéticas e Nucleares / Comissão Nacional de Energia Nuclear - São Paulo), and the tomographic reconstruction was performed with the system's own software, using the standard parameters of the clinical protocols. Point sources suspended in air were used for the transverse spatial resolution measurements, and line sources immersed in water for the scatter fraction and sensitivity measurements. For the evaluation of sensitivity, uniformity, true event rate, random event rate and dead time of the electronic system, images were acquired of a phantom built specifically for this work, following the instructions of NEMA NU 2-1994 for dedicated PET systems. The accuracy of the attenuation correction was verified with images of this phantom containing three cylindrical inserts of different densities: water, air and Teflon. The resulting protocols can serve as a guideline for Quality Control and Assurance Programs and for the performance evaluation of PET/SPECT systems with two scintillation detectors in coincidence. If implemented by the clinical centers that use this type of equipment, they will improve the quality and reliability of the resulting images, as well as of their quantification.
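As an illustration of the transmission-based attenuation correction evaluated in this record, the sketch below shows the generic textbook relation between a blank scan, a transmission scan and the correction factors. It is not the protocol from the thesis, and a real 137Cs setup would additionally need the attenuation measured at 662 keV to be scaled to 511 keV; the function and parameter names are illustrative.

```python
import numpy as np

def attenuation_correction_factors(blank_counts, transmission_counts, eps=1e-6):
    """Per-line-of-response attenuation correction factors (ACFs).

    For an ideal, noise-free measurement ACF = blank / transmission,
    which equals exp(integral of the attenuation coefficient mu along
    the line of response).
    """
    blank = np.asarray(blank_counts, dtype=float)
    trans = np.asarray(transmission_counts, dtype=float)
    return blank / np.maximum(trans, eps)

def correct_emission_sinogram(emission_sinogram, acf):
    """Apply the ACFs to the emission data before tomographic reconstruction."""
    return np.asarray(emission_sinogram, dtype=float) * acf
```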
262

Visão computacional para veículos inteligentes usando câmeras embarcadas / Computer vision for intelligent vehicles using embedded cameras

Paula, Maurício Braga de January 2015
The use of vision-based driver assistance systems (DAS) has contributed considerably to reducing accidents and, consequently, to safer driving. These systems basically use an embedded video camera (usually fixed to the windshield) to extract information about the road and help the driver. Small distractions or a loss of concentration may be enough for an accident to occur. This work presents the development of algorithms to extract information about road signage. More specifically, it addresses a camera calibration algorithm that exploits the geometry of the road, algorithms for the extraction of painted road markings (lane markings), and the detection and identification of vertical traffic signs. Experimental results indicate that the camera calibration method obtains good estimates of the extrinsic parameters, with errors below 0.5. The average error found in the experiments for the estimated camera height was around 12 cm (a relative error of roughly 10%), which makes it possible to explore augmented reality as an application. The overall accuracy for the detection and classification of lane markings (dashed, solid, dashed-solid, solid-dashed or double solid) was above 96% over a variety of situations, such as shadows, illumination changes, and degraded asphalt and paint. Finally, the calibrated camera was used to restrict the search region of a sliding-window detector for vertical traffic signs and to search at a single scale within each region, determined by its distance to the vehicle. The results report an overall classification rate of approximately 99% for the no-overtaking sign, on a database limited to 962 samples.
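The single-scale-per-region search described in this abstract follows directly from pinhole projection: once the camera is calibrated, the expected pixel size of a sign of known physical size depends only on its distance. A minimal sketch of that relation follows; the function and parameter names are illustrative, not taken from the thesis.

```python
def expected_sign_height_px(focal_length_px, sign_height_m, distance_m):
    """Pinhole projection: an object of height H at distance Z spans roughly
    f * H / Z pixels, so each distance band of the search region can be
    scanned with a single sliding-window scale."""
    return focal_length_px * sign_height_m / distance_m

# Example: a 0.75 m sign seen 30 m ahead with an 800 px focal length
# spans about 20 px, which fixes the window size for that distance band.
print(expected_sign_height_px(800, 0.75, 30.0))
```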
263

Desenvolvimento de simuladores renais para uso em medicina nuclear / Development of renal phantoms for use in nuclear medicine

Dullius, Marcos Alexandre 19 September 2014
Quality control programs in nuclear medicine include verifying the efficiency of all equipment used for diagnosis and therapy, including scintillation cameras. To that end, we developed and evaluated the performance of four kidney phantoms (two static anthropomorphic, one semi-dynamic, and one dynamic) for acquiring static and dynamic renal scintigraphy images. The static anthropomorphic phantoms were used to characterize and evaluate the response of the processing system to different radionuclide concentrations through static renal scintigraphy images (99mTc-DMSA), obtained with posterior, right posterior oblique, left posterior oblique, and anterior views. The static phantoms were made in two ways: the first was made of acrylic from a mold of a pair of human kidneys preserved in formalin, and the second was built from acrylonitrile butadiene styrene (ABS) on a 3D printer, based on a thoracic computed tomography (CT) scan processed with the Slicer program. The semi-dynamic and dynamic phantoms were constructed to characterize and evaluate dynamic renal scintigraphy images. In the semi-dynamic phantom, the radiotracer was injected manually, whereas in the dynamic phantom it was injected automatically through an injector system. With the semi-dynamic phantom, it was possible to analyze the response of the image processing system for a renogram with a normal renal scintigraphic appearance. The dynamic phantom enabled simulated studies of normal renal scintigraphy and of four other renogram patterns. The static anthropomorphic kidney phantoms proved efficient for evaluating varying radionuclide concentrations. The dynamic kidney phantom was useful for analyzing scintigraphic images and reproducing different elimination patterns of the radioisotope, allowing the analysis of different renograms. Therefore, the new kidney phantoms are useful for quality control of image processing systems in renal scintigraphy.
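A renogram of the kind produced with the dynamic phantom is simply the time-activity curve of a kidney region of interest. The sketch below shows how such a curve can be extracted from a dynamic acquisition; the function and argument names are hypothetical, and the optional background-ROI correction is common practice rather than necessarily the processing used in the thesis.

```python
import numpy as np

def renogram(frames, kidney_mask, background_mask=None):
    """Time-activity curve: total counts inside the kidney ROI for each frame
    of a dynamic renal scintigraphy, optionally corrected by an area-scaled
    background ROI."""
    frames = np.asarray(frames, dtype=float)       # shape (n_frames, rows, cols)
    curve = frames[:, kidney_mask].sum(axis=1)     # counts in the ROI per frame
    if background_mask is not None:
        bg = frames[:, background_mask].sum(axis=1)
        curve -= bg * (kidney_mask.sum() / background_mask.sum())
    return curve
```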
264

Caméras 3D pour la localisation d'un système mobile en environnement urbain / 3D cameras for the localization of a mobile platform in urban environment

Mittet, Marie-Anne 15 June 2015
The aim of the thesis is to develop a new kind of localization system, composed of three 3D cameras (such as the Kinect) and an additional fisheye camera. The localization algorithm is based on visual odometry and computes the trajectory of the mobile platform in real time from the data provided by the 3D cameras. The originality of the method lies in the use of orthoimages generated from the point clouds acquired in real time by the three cameras. The successive positions, and hence the trajectory, of the mobile platform are derived from the differences between successive orthoimages.
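As a toy version of the orthoimage comparison at the core of this method: if two successive orthoimages differ only by a planar translation, that translation (and hence the platform displacement) can be recovered by phase correlation. The sketch below ignores rotation and uses scikit-image; it illustrates the principle and is not the thesis implementation.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def displacement_between_orthoimages(ortho_prev, ortho_curr, metres_per_pixel):
    """Estimate the planar shift between two successive orthoimages and
    convert it to a metric displacement of the platform (rotation ignored)."""
    shift_px, _error, _phasediff = phase_cross_correlation(ortho_prev, ortho_curr)
    d_row, d_col = shift_px
    return d_col * metres_per_pixel, d_row * metres_per_pixel
```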
265

Suivi visuel d'objets dans un réseau de caméras intelligentes embarquées / Visual multi-object tracking in a network of embedded smart cameras

Dziri, Aziz 30 October 2015
Multi-object tracking is a major step in many computer vision applications. The requirements of these applications in terms of performance, processing time, energy consumption and ease of deployment make the use of low-power embedded platforms essential. In this thesis, we designed a multi-object tracking system that achieves real-time processing on a low-cost, low-power embedded smart camera, and extended the tracking pipeline to work in a network of cameras with non-overlapping fields of view. The pipeline is composed of a detection module based on background subtraction and a tracker using the probabilistic Gaussian Mixture Probability Hypothesis Density (GMPHD) filter. The background subtraction method we developed combines the segmentation produced by the Zipfian Sigma-Delta method with the gradient of the input image; this combination provides reliable detection with low computational complexity. The output of the background subtraction is processed with a connected-components analysis algorithm to extract the features of the moving objects, which are used as observations for an improved version of the GMPHD filter. Indeed, the original GMPHD filter does not handle occlusions between objects, so we integrated two new modules into it for occlusion management. When no occlusion is detected, the motion features of the objects are used for tracking; when an occlusion is detected, the appearance features of the objects, represented by grey-level histograms, are saved and used for re-identification at the end of the occlusion. The proposed pipeline was optimized and implemented on an embedded smart camera composed of a Raspberry Pi (version 1) board and the RaspiCam camera module. The results show that, in addition to its low complexity, the tracking quality of our method is close to that of state-of-the-art methods, and a frame rate of 15-30 fps was achieved on the smart camera, depending on the image resolution. In the second part of the thesis, we designed a distributed multi-object tracking approach for a network of cameras with non-overlapping fields of view, in which each camera runs a GMPHD filter as its tracker. The approach is based on a probabilistic formulation that models the correspondences between objects with an appearance probability and a space-time probability. The appearance of an object is represented by an m-dimensional vector, which can be considered a histogram. The space-time features are the transition time between two entry/exit regions of the network and the probability of transition from one region to another; the transition time is modeled as a Gaussian distribution whose mean and variance are assumed known. The distributed nature of the approach allows tracking across the network with little communication between the cameras. The approach was tested in simulation and its complexity was analyzed; the results are promising for its operation in a real network of smart cameras.
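To make the detection stage of this pipeline more concrete, the sketch below gives a minimal version of the basic Sigma-Delta background estimator on which it builds. The Zipfian update scheduling and the exact gradient-fusion rule used in the thesis are not reproduced; the gradient step shown is only an illustrative guess, and all names are assumptions.

```python
import numpy as np

def sigma_delta_step(frame, M, V, n=2):
    """One update of the basic Sigma-Delta background estimator: the
    background M follows the frame by +/-1 per pixel, V tracks n times the
    absolute difference, and foreground pixels satisfy |frame - M| > V."""
    frame = frame.astype(np.int32)
    M = M + np.sign(frame - M)
    delta = np.abs(frame - M)
    V = np.clip(V + np.where(delta > 0, np.sign(n * delta - V), 0), 1, 255)
    return M, V, delta > V

def gradient_support(frame, foreground, grad_thresh=20.0):
    """Illustrative fusion with the image gradient: keep only foreground
    pixels supported by significant local contrast."""
    gy, gx = np.gradient(frame.astype(float))
    return foreground & (np.hypot(gx, gy) > grad_thresh)

# Typical use: initialise M with the first frame (as int32) and V with ones,
# then call sigma_delta_step on every new frame before the connected-components
# analysis that feeds the GMPHD tracker.
```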
266

Calcul de pose dynamique avec les caméras CMOS utilisant une acquisition séquentielle / Dynamic pose estimation with CMOS cameras using sequential acquisition

Magerand, Ludovic 18 December 2014
In computer science, computer vision is concerned with extracting information from cameras. Camera sensors can be produced with CMOS technology, which is widely used in mobile devices because of its low cost and small footprint. This technology allows fast image acquisition by exposing the image rows sequentially; however, it produces deformations in the image if there is motion between the camera and the filmed scene. This effect is known as rolling shutter, and many methods have tried to correct these artifacts. Rather than correcting it, previous works have developed methods to extract information about the motion from this effect. These methods rely on an extension of the classical geometric camera model that takes into account the sequential acquisition and the motion, assumed uniform, between the sensor and the scene. From this model, the usual pose estimation (estimation of the position and orientation of the camera with respect to the scene) can be extended to also estimate the motion parameters. Following this approach, we present a generalization to non-uniform motions based on smoothing the derivatives of the motion parameters. We then present a polynomial model of the rolling shutter and a global optimization method to estimate its parameters; properly implemented, this makes it possible to establish an automatic matching between the 3D model and the image. Finally, we compare these methods on both simulated and real data and conclude.
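The extended camera model this abstract refers to can be illustrated with a small projection routine: under the uniform-motion assumption, the pose used to project a 3D point depends on the image row in which the point lands, which is resolved here by a short fixed-point iteration. The first-order rotation update and all names below are illustrative assumptions, not the formulation of the thesis.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def project_rolling_shutter(X, K, R0, t0, omega, velocity, line_delay, n_iter=5):
    """Project a 3D point X under a uniform-motion rolling-shutter model.

    The row in which the point is imaged fixes its exposure time, which in
    turn moves the camera pose; a few fixed-point iterations make the two
    consistent.  The rotation is updated to first order: R(t) ~ (I + t*[w]x) R0.
    """
    row = 0.0
    for _ in range(n_iter):
        t_exp = row * line_delay                   # exposure time of that row
        R = (np.eye(3) + t_exp * skew(omega)) @ R0
        t = t0 + t_exp * velocity
        u, v, w = K @ (R @ X + t)
        row = v / w                                # image row for the next iterate
    return np.array([u / w, v / w])
```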
267

Etalonnage de caméras à champs disjoints et reconstruction 3D : Application à un robot mobile / Non-overlapping camera calibration and 3D reconstruction : Application to Vision-Based Robotics

Lébraly, Pierre 18 January 2012
This work was carried out within the VIPA project ("Automatic Electric Vehicle for Passenger Transportation"), during which LASMEA and its partners developed vehicles able to navigate autonomously, without any dedicated external infrastructure, in urban areas (parking lots, pedestrian zones, airports). The vehicle carries two cameras, one at the front and one at the back. Before being used for autonomous navigation, the vehicle must first be calibrated and driven manually in order to build the map of visual landmarks in which it will later navigate automatically. The goal of this thesis is to develop and apply user-friendly methods to calibrate this set of cameras with completely non-overlapping fields of view. After a preliminary intrinsic calibration step and a state of the art on multi-camera rigs, we develop and test several extrinsic calibration methods (which determine the relative poses of the non-overlapping cameras). The first method uses a planar mirror to create a field of view common to the different cameras. The second consists in manoeuvring the vehicle while each camera observes a static scene composed of targets detected with sub-pixel accuracy. In the third, we show that the extrinsic calibration can be obtained simultaneously with the 3D reconstruction (for example during the learning step), using interest points as visual landmarks; for this purpose, a multi-camera bundle adjustment was developed with a sparse implementation. Finally, we present a calibration that determines the orientation of the multi-camera rig with respect to the vehicle.
268

Renderização interativa de câmeras virtuais a partir da integração de múltiplas câmeras esparsas por meio de homografias e decomposições planares da cena / Interactive virtual camera rendering from multiple sparse cameras using homographies and planar scene decompositions

Jeferson Rodrigues da Silva 10 February 2010
Image-based rendering techniques allow novel views of a scene to be generated from a set of images acquired from different viewpoints. By extending these techniques to videos, we can allow navigation in time and space through a scene captured by multiple cameras. In this work, we tackle the problem of generating novel photorealistic views of dynamic scenes, containing independently moving objects, from videos acquired by multiple cameras with different viewpoints. The challenges include fusing the images from the multiple cameras while minimizing the brightness and color differences between them, detecting and extracting the moving objects, and rendering novel views that combine a static model of the scene with approximate models of the moving objects. It is also important that novel views can be generated at interactive frame rates, so that a user can navigate naturally through the rendered scene. Applications of these techniques are diverse and include entertainment, such as interactive digital television that lets the viewer choose the viewpoint of movies or sports events, and virtual-reality training simulations, where realistic scenes reconstructed from real ones are important. We present a color calibration algorithm capable of minimizing the color and brightness differences between images obtained from cameras whose colors have not been calibrated. We also describe a method for interactive rendering of novel views of dynamic scenes that produces views with quality similar to that of the original scene videos.
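A very reduced version of the kind of correction the color calibration step performs is a per-channel gain and offset fitted by least squares on corresponding pixel samples from a source and a reference camera. The actual algorithm of the thesis is more elaborate; the sketch below is a simple baseline with illustrative names.

```python
import numpy as np

def fit_gain_offset(src_pixels, ref_pixels):
    """Least-squares gain and offset per colour channel so that
    gain * src + offset approximates the reference camera.
    Both inputs are (n_pixels, 3) arrays of corresponding samples."""
    gains, offsets = [], []
    for c in range(src_pixels.shape[1]):
        A = np.stack([src_pixels[:, c], np.ones(len(src_pixels))], axis=1)
        g, o = np.linalg.lstsq(A, ref_pixels[:, c], rcond=None)[0]
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def apply_correction(image, gains, offsets):
    """Apply the per-channel correction to an 8-bit RGB image."""
    out = image.astype(float) * gains + offsets
    return np.clip(out, 0, 255).astype(np.uint8)
```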
269

Digital high school photography curriculum

Wolin, Martin Michael 01 January 2003
The purpose of this thesis is to create a high school digital photography curriculum that is relevant to real-world applications and would enable high school students to enter the workforce with marketable skills or go on to post-secondary education with advanced knowledge in the field of digital imaging.
270

Využití metod projektového řízení ve vybraném podniku / The Use of Project Management Methods in a Selected Company

Hubert, Michal January 2017
The goal of the thesis is to create project documentation for extending the services of a selected company, which focuses on selling sports cameras both in a Prague store and through an e-shop. The documentation was created using project management methods.
