1 |
Application of an Omnidirectional Camera to Detection of Moving Objects in 3D Space. Hsu, Chiang-Hao, 29 August 2011.
Conventional cameras usually have a small field of view (FOV), which limits the observable region. Vision systems built on them can also limit a robot's motion capabilities when it comes to object tracking. An omnidirectional camera has a wide FOV and can obtain environmental data from all directions; compared with conventional cameras, this wide FOV reduces blind regions and improves tracking ability. In this thesis, we assume an omnidirectional camera is mounted on a moving platform that travels with planar motion. We apply optical flow and the CAMShift algorithm to track an object that is non-propelled and subject only to gravity. Then, using parabolic fitting, the least-squares method, and the Levenberg-Marquardt method to predict the 3D coordinates of the object at the current and next instants, we predict the position of the drop point and drive the moving platform to meet the object there. Tracking and drop-point prediction are achieved successfully even while the camera undergoes planar motion and rotation.
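The drop-point prediction described above (fitting a ballistic model to the tracked 3D positions, then solving for the floor-crossing time) can be sketched as follows. The function name, sample values, and the plain least-squares polynomial fit are illustrative assumptions, not the thesis's Levenberg-Marquardt implementation:

```python
import numpy as np

def predict_drop_point(t, pos, floor_z=0.0):
    """t: (N,) timestamps; pos: (N, 3) x/y/z samples; returns (x, y) at floor_z."""
    t = np.asarray(t, dtype=float)
    pos = np.asarray(pos, dtype=float)
    # Horizontal motion is (ideally) linear in time; vertical is parabolic.
    cx = np.polyfit(t, pos[:, 0], 1)      # x(t) = cx[0]*t + cx[1]
    cy = np.polyfit(t, pos[:, 1], 1)
    cz = np.polyfit(t, pos[:, 2], 2)      # z(t) = cz[0]*t^2 + cz[1]*t + cz[2]
    # Solve z(t) = floor_z and keep the later (descending) root.
    roots = np.roots([cz[0], cz[1], cz[2] - floor_z])
    t_hit = max(r.real for r in roots if abs(r.imag) < 1e-9)
    return np.polyval(cx, t_hit), np.polyval(cy, t_hit)

# Synthetic check: object launched from z = 2 m with vz = +1 m/s under g = 9.81.
ts = np.linspace(0.0, 0.4, 9)
xyz = np.stack([1.0 + 0.5 * ts,
                2.0 - 0.2 * ts,
                2.0 + 1.0 * ts - 0.5 * 9.81 * ts**2], axis=1)
x_hit, y_hit = predict_drop_point(ts, xyz)
```

On noise-free samples the fit recovers the trajectory exactly; with real vision data a robust or iterative solver (as in the thesis) would be preferable.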
|
2 |
The Application of Index Based, Region Segmentation, and Deep Learning Approaches to Sensor Fusion for Vegetation Detection. Stone, David L., 01 January 2019.
This thesis investigates the application of index based, region segmentation, and deep learning methods to the sensor fusion of omnidirectional (O-D) infrared (IR) sensors, Kinect sensors, and O-D vision sensors to increase the level of intelligent perception for unmanned robotic platforms. The goals of this work are, first, to provide a more robust calibration approach and improve the calibration of low-resolution, noisy IR O-D cameras, and second, to explore the best approach to sensor fusion for vegetation detection. We compared index based, region segmentation, and deep learning methods, aiming for a significant reduction in false positives while maintaining reasonable vegetation detection.
The results are as follows:
- Direct Spherical Calibration of the IR camera provided more consistent and robust calibration-board capture and gave the best overall calibration results, with sub-pixel accuracy.
- The best approach to sensor fusion for vegetation detection was the deep learning approach; the three methods are detailed in the following chapters, with the results summarized here.
- The modified Normalized Difference Vegetation Index approach achieved 86.74% recognition with 32.5% false positives, with peaks up to 80%.
- Thermal Region Fusion (TRF) achieved a lower recognition rate, 75.16%, but reduced false positives to 11.75% (a 64% reduction).
- Our Deep Learning Fusion Network (DeepFuseNet) showed the best results, with a significant (92%) reduction in false positives compared to our modified Normalized Difference Vegetation Index approach; recognition was 95.6% with 2% false positives.
Current approaches focus primarily on O-D color vision for localization, mapping, and tracking, and do not adequately address the application of these sensors to vegetation detection. We demonstrate the contrast between current approaches and our deep sensor fusion (DeepFuseNet) for vegetation detection. The combination of O-D IR and O-D color vision, coupled with deep learning for the extraction of vegetation material type, has great potential for robot perception. This thesis examines two architectures: 1) autoencoder feature extractors feeding a deep convolutional neural network (CNN) fusion network (DeepFuseNet), and 2) bottleneck CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet), for the fusion of O-D IR and O-D visual sensors. We show that both the vegetation recognition rate and the number of false detects inherent in classical index-based spectral decomposition are greatly improved using our DeepFuseNet architecture.
We first investigate the calibration of an omnidirectional infrared (IR) camera for intelligent perception applications. Edge boundaries in low-resolution O-D IR images are not as sharp as in color vision cameras; as a result, standard calibration methods are harder to use and less accurate with the low definition of the omnidirectional IR camera. To address omnidirectional IR camera calibration more fully, we propose a new calibration-grid center-coordinate control-point discovery methodology and a Direct Spherical Calibration (DSC) approach for more robust and accurate calibration. DSC addresses the limitations of existing methods by using the spherical coordinates of the centroid of the calibration board to directly triangulate the location of the camera center and iteratively solve for the camera parameters. We compare DSC to three baseline visual calibration methodologies, augmenting them with additional output of the spherical results for comparison. We also determine the optimum number of calibration boards using an evolutionary algorithm and Pareto optimization to find the best combination of accuracy, methodology, and number of calibration boards. The benefits of DSC are more efficient calibration-board geometry selection and better accuracy than the three baseline visual calibration methodologies.
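As a point of reference for the spherical formulation above, a minimal Cartesian-to-spherical conversion of a detected board centroid might look like this; it is an illustrative helper only, not the DSC algorithm itself:

```python
import math

def to_spherical(x, y, z):
    """Convert a centroid in camera coordinates to (radius, polar, azimuth)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)      # polar angle measured from the optical axis
    phi = math.atan2(y, x)        # azimuth around the optical axis
    return r, theta, phi

# A centroid one unit off-axis and one unit along the axis sits at 45 degrees.
r, theta, phi = to_spherical(0.0, 1.0, 1.0)
```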
In the context of vegetation detection, the fusion of omnidirectional (O-D) infrared (IR) and color vision sensors may increase the level of vegetation perception for unmanned robotic platforms. A literature search found no significant research in our area of interest: the fusion of O-D IR and O-D color vision sensors for the extraction of feature material type has not been adequately addressed. We augment index-based spectral decomposition with IR region-based spectral decomposition to address the number of false detects inherent in index-based spectral decomposition alone. Our work shows that fusing the Normalized Difference Vegetation Index (NDVI) from the O-D color camera with the thresholded IR signature region associated with the vegetation minimizes the number of false detects seen with NDVI alone. The contributions of this work are two new techniques: the Thresholded Region Fusion (TRF) technique for the fusion of O-D IR and O-D color, and the fusion of the Kinect vision sensor with the O-D IR camera. Our experimental validation demonstrates a 64% reduction in false detects compared to classical index-based detection.
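The NDVI-plus-thermal fusion idea can be sketched as below; the threshold values and array names are illustrative assumptions, not the thesis's tuned parameters:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Standard NDVI: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def fused_vegetation_mask(nir, red, ir, ndvi_thresh=0.3, ir_thresh=0.5):
    ndvi_mask = ndvi(nir, red) > ndvi_thresh   # NDVI candidate pixels
    ir_mask = ir > ir_thresh                   # thresholded thermal region
    return ndvi_mask & ir_mask                 # keep only agreeing detections

# 2x2 toy frame: one pixel passes NDVI only, one IR only, two agree/disagree.
nir = np.array([[0.8, 0.8], [0.2, 0.8]])
red = np.array([[0.2, 0.2], [0.8, 0.2]])
ir = np.array([[0.9, 0.1], [0.9, 0.9]])
mask = fused_vegetation_mask(nir, red, ir)
```

Requiring agreement between the two cues is exactly what suppresses NDVI-only false positives in this toy example.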
We finally compare our DeepFuseNet results with our previous work with the Normalized Difference Vegetation Index (NDVI) and IR region-based spectral fusion. This current work shows that the fusion of the O-D IR and O-D visual streams using our DeepFuseNet deep learning approach outperforms the previous NDVI fused with far-infrared region segmentation. Our experimental validation demonstrates a 92% reduction in false detects compared to classical index-based detection. This work contributes a new technique for the fusion of O-D vision and O-D IR sensors using two deep CNN feature extractors feeding into a fully connected CNN network (DeepFuseNet).
|
3 |
Omnidirectional Mobile Mechanisms and Integrated Motor Mechanisms for Wheeled Locomotion Devices / 車輪式移動装置用の全方向移動機構と統合型モータ機構の研究. Terakawa, Tatsuro, 25 March 2019.
Associated degree program: Collaborative Graduate Program in Design / Kyoto University / 0048 / New-system doctoral course / Doctor of Philosophy (Engineering), 甲第21755号 (工博第4572号), call no. 新制||工||1713 (University Library) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / Examiners: Prof. 小森 雅晴, Prof. 松野 文俊, Prof. 松原 厚 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Kyoto University / DFAM
|
4 |
Design a MIMO printed dipole antenna for 5G sub-band applications. Najim, H.S., Mosleh, M.F., Abd-Alhameed, Raed, 05 November 2022.
In this paper, a planar multiple input, multiple output (MIMO) dipole antenna for future sub-6 GHz 5G applications is proposed. The planar MIMO structure consists of 4 antenna elements with an overall size of 150×82×1 mm³. The single antenna element has a size of 32.5×33.7×1 mm³, printed on an FR-4 dielectric substrate with εr = 4.4 and tanδ = 0.02. The suggested antenna structure exhibits a good impedance bandwidth of 3.24 GHz, spanning 3.3 to 6.6 GHz with S11 ≤ -10 dB, and antenna gain varying from 5.2 up to 7.05 dB across the entire band, which covers the whole sub-6 GHz frequency band of the 5G application. Good isolation is achieved between the MIMO elements due to low surface waves inside the MIMO antenna substrate. The radiation of the MIMO antenna structure can be manipulated, and many beam types can be achieved as desired. The high-frequency structure simulator (HFSS) software package is used to design and simulate the proposed structure, while CST MWS is used to validate the results.
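As a quick, illustrative sanity check of the quoted figures, the fractional bandwidth of a 3.3-6.6 GHz band and the VSWR implied at the S11 = -10 dB band edges can be computed directly (standard definitions, not values from the paper):

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """FBW = 2 * (f_high - f_low) / (f_high + f_low)."""
    return 2.0 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz)

def vswr_from_s11_db(s11_db):
    """VSWR from the return-loss magnitude in dB."""
    gamma = 10.0 ** (s11_db / 20.0)       # reflection-coefficient magnitude
    return (1.0 + gamma) / (1.0 - gamma)

fbw = fractional_bandwidth(3.3, 6.6)      # a genuinely wideband antenna
vswr = vswr_from_s11_db(-10.0)            # matching quality at the band edges
```

A -10 dB match corresponds to a VSWR just under 2, the usual acceptance criterion for such antennas.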
|
5 |
Convergence in mixed reality-virtuality environments: facilitating natural user behavior. Johansson, Daniel, January 2012.
This thesis addresses the subject of converging real and virtual environments into a combined entity that can provide physiologically compliant interfaces for the purpose of training. Based on the mobility and physiological demands of dismounted soldiers, the base assumption is that greater immersion means better learning and potentially higher training transfer: as the user can interface with the system in a natural way, more focus and energy can be devoted to training rather than to control itself. Identified simulator requirements relating to physical and psychological user aspects are support for unobtrusive and wireless use, high field of view, high-performance tracking, use of authentic tools, the ability to see other trainees, unrestricted movement, and physical feedback. Using only commercially available systems would be prohibitively expensive while not providing a solution fully optimized for this simulator's target group. For this reason, most of the systems that compose the simulator are custom made, both to accommodate physiological human aspects and to bring down costs. Using chroma keying, a cylindrical simulator room, and parallax-corrected, high-field-of-view video see-through head-mounted displays, the real and virtual realities are mixed. This allows the use of real tools as well as layering and manipulation of real and virtual objects. Furthermore, a novel omnidirectional floor and an accompanying interface scheme are developed to allow limitless physical walking to be used for virtual translation; a physically confined real space is thereby transformed into an infinite converged environment. The omnidirectional floor regulation algorithm can also provide physical feedback by adjusting velocity to synchronize virtual obstacles with the surrounding simulator walls. As an alternative use of the simulator, an omnidirectional robotic platform has been developed that can match the user's movements, which can be utilized to increase situation awareness in telepresence applications.
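The floor-regulation idea can be sketched as a simple saturated centering law: the floor surface moves so as to carry the user back toward the platform centre, so unbounded walking maps to a bounded physical space. The gain, velocity limit, and proportional form are illustrative assumptions, not the thesis's algorithm:

```python
def floor_velocity(user_pos, gain=1.5, v_max=2.0):
    """user_pos: (x, y) offset of the tracked user from the platform centre, in m.
    Returns the commanded floor surface velocity (vx, vy) in m/s."""
    def clip(v):
        return max(-v_max, min(v_max, v))
    # Surface velocity proportional to the offset, directed back toward (0, 0),
    # saturated so the floor never exceeds its mechanical speed limit.
    return clip(-gain * user_pos[0]), clip(-gain * user_pos[1])

# User has drifted 0.5 m right and 2 m back: the y-command saturates at v_max.
vx, vy = floor_velocity((0.5, -2.0))
```

A real controller would add filtering and the obstacle-synchronization term described above, but the centering behaviour is the core of the scheme.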
|
6 |
Redesign of the Omnideck platform: with respect to DfA and modularity / Omkonstruktion av Omnideck-plattformen: med hänsyn till DfA och modularitet. Brinks, Hanne; Bruins, Mathijs, January 2016.
In this report, a product development process is constructed and used to redesign an omnidirectional treadmill, the Omnideck. The current Omnideck platform was designed without regard for assembly. Using modularity and design-for-assembly theories, incorporated into the product development process, the Omnideck platform's design is improved with respect to assembly time. The original design required 175 labour hours to install; the improved design requires ten and a half hours to install at a customer site. This is achieved by redesigning the Omnideck into individual modules that allow for faster installation.
|
7 |
Fermeture de boucle pour la cartographie topologique et la navigation avec des images omnidirectionnelles / Loop closure for topological mapping and navigation with omnidirectional images. Korrapati, Hemanth, 03 July 2013.
Over the last three decades, research in mobile robotic mapping and localization has seen significant progress.
However, most research casts these problems in the SLAM framework while trying to map and localize metrically. As metrical mapping techniques are vulnerable to errors caused by drift, their ability to produce consistent maps is limited to small-scale environments. Consequently, topological mapping approaches, which are independent of metrical information, stand as an alternative in large-scale environments. This thesis mainly deals with the loop closure problem, which is the crux of any topological mapping algorithm. Our main aim is to solve the loop closure problem efficiently and accurately using an omnidirectional imaging sensor. Sparse topological maps can be built by representing groups of visually similar images of a sequence as nodes of a topological graph. We propose a sparse / hierarchical topological mapping framework which uses Image Sequence Partitioning (ISP) to group visually similar images of a sequence as nodes, which are then connected on the occurrence of loop closures to form a topological graph. A hierarchical loop closure algorithm first retrieves the similar nodes and then performs an image similarity analysis on the retrieved nodes. An indexing data structure called the Hierarchical Inverted File (HIF) is proposed to store the sparse maps and facilitate efficient hierarchical loop closure. TF-IDF weighting is combined with spatial and frequency constraints on the detected features for improved loop closure robustness. Sparsity, efficiency, and accuracy of the resulting maps are evaluated and compared to those of the other two existing techniques on publicly available outdoor omnidirectional image sequences. Modest loop closure recall rates have been observed without using the epipolar geometry verification step common in other approaches. Although efficient, the HIF-based approach has certain disadvantages, such as low sparsity of maps and a low recall rate of loop closure.
To address these shortcomings, another loop closure technique using a spatial-constraint-based similarity measure on omnidirectional images has been proposed. The low sparsity of maps caused by over-partitioning of the input sequence is overcome by using the Vector of Locally Aggregated Descriptors (VLAD) for ISP. Poor resolution of the omnidirectional images causes fewer feature matches in image pairs, resulting in reduced recall rates; a spatial constraint exploiting the omnidirectional image structure is used for feature matching, which gives accurate results even with fewer matches. Recall rates better than the contemporary FABMAP 2.0 approach have been observed without additional geometric verification. The second contribution of this thesis is the formulation of a visual memory management approach suitable for the long-term operability of mobile robots, applicable to both topological and metrical visual maps. Initial results demonstrating the capabilities of this approach are provided. Finally, a detailed description of the acquisition and construction of our multi-sensor dataset is provided. This dataset is intended to serve researchers in the mobile robotics and vision communities in evaluating applications such as visual SLAM, mapping, and visual odometry. It is the first dataset with omnidirectional images acquired on a car-like vehicle driven along a trajectory with multiple loops. The dataset consists of 6 sequences with data from 11 sensors, including 7 cameras, stretching 18 kilometers in a semi-urban environment with complete and precise ground truth.
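The TF-IDF weighting used in loop-closure scoring can be sketched over bag-of-visual-words image signatures as follows; the toy vocabulary statistics and plain cosine scoring are illustrative, and the spatial and frequency constraints described above are omitted:

```python
import math
from collections import Counter

def tfidf_vector(word_ids, n_images, doc_freq):
    """Map a list of visual-word IDs to a sparse TF-IDF vector."""
    counts = Counter(word_ids)
    total = sum(counts.values())
    return {w: (c / total) * math.log(n_images / doc_freq[w])
            for w, c in counts.items()}

def cosine_similarity(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Assumed statistics for a 100-image map: rare words (low document frequency)
# dominate the score, which is exactly why TF-IDF helps loop-closure retrieval.
doc_freq = {0: 50, 1: 5, 2: 80, 3: 2}
query = tfidf_vector([1, 1, 3, 2], 100, doc_freq)
candidate = tfidf_vector([1, 3, 3, 0], 100, doc_freq)
score = cosine_similarity(query, candidate)
```

The two images share the rare words 1 and 3, so the candidate scores highly despite only partial overlap.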
|
8 |
Localização e mapeamento simultâneos com auxílio visual omnidirecional / Simultaneous localization and mapping with omnidirectional vision. Guizilini, Vitor Campanholo, 12 August 2008.
The problem of simultaneous localization and mapping, known as the SLAM problem, is one of the greatest obstacles that the field of autonomous robotics faces nowadays. It concerns a robot's ability to navigate through an unknown environment, constructing a map of the regions it has already visited while localizing itself on this map. The imprecision inherent in the sensors used to collect information generates errors that accumulate over time, preventing precise estimates of localization and mapping when the data are used directly. SLAM algorithms try to eliminate these errors by taking advantage of their mutual dependence and solving both problems simultaneously, using the results of one step to refine the estimates of the other. One possible way to achieve this is the establishment of landmarks in the environment that the robot can use as points of reference to localize itself while it navigates. This work presents a solution to the SLAM problem using an omnidirectional vision system to detect these landmarks. The choice of visual sensors allows the extraction of natural landmarks and robust matching under different points of view as the robot moves through the environment. Omnidirectional vision amplifies the robot's field of view, increasing the number of landmarks observed at each instant. Detected landmarks are added to the map, and when they are later recognized they provide information that the robot can use to refine its estimates of localization and mapping, eliminating accumulated errors and keeping the estimates precise even after long periods of navigation. This solution has been tested in real navigation situations, and the results show a substantial improvement compared to those obtained through the direct use of the collected information.
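The benefit of re-observing a mapped landmark can be illustrated with a 1-D Kalman-style fusion step, a didactic sketch rather than the full estimation machinery of a SLAM system; all numbers are assumed:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two Gaussian estimates of the same quantity."""
    k = var_a / (var_a + var_b)           # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a               # fused variance < min(var_a, var_b)... 
    return mean, var                      # ...whenever both variances are finite

# Odometry says x = 10.0 m but has drifted (variance 4.0); re-observing a known
# landmark implies x = 9.0 m with variance 1.0. Fusion pulls the estimate toward
# the landmark and shrinks the uncertainty below either source alone.
mean, var = fuse(10.0, 4.0, 9.0, 1.0)
```

This variance reduction on every landmark re-observation is what keeps the accumulated drift bounded over long navigation runs.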
|
9 |
Desenvolvimento de transdutor de modo SH0 omnidirecional utilizando arranjo de cerâmicas piezoelétricas / Development of an omnidirectional SH0-mode transducer using a piezoelectric ceramic array. Menin, Paulo Dambros, January 2017.
The use of guided waves in structural health monitoring techniques has already proved to be an interesting alternative to reduce inspection time and overall operational costs.
Guided wave techniques enable monitoring large structures from a single access point, allowing the inspection of distant and hard-to-access parts. Among the fundamental propagation modes of guided waves in plates, the SH0 mode has the advantage of being non-dispersive, as well as being subject to less attenuation when propagating through surfaces in contact with fluids. In this work, an omnidirectional SH0-mode transducer for a steel plate is proposed, based on a piezoelectric ceramic array, which allows inspecting an area of a component through a single propagation mode. Numerical simulations parameterizing features of the transducer, such as the number of piezoelectric elements and the geometric dimensions of the transducer's components, were developed to characterize and investigate the effect on the output response. To validate the numerical simulation results, experimental models of the transducer were built and installed on a 2 mm thick steel plate to verify the generated propagation modes. Analyzing the relation between the intensity of each mode generated in the plate and the emission quality of the SH0 mode allowed the identification of frequencies that presented the most desirable response characteristics. The behavior of the built transducers was also studied when excited by waves propagating through the plate, evaluating their use as receivers as well.
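The non-dispersive character of SH0 noted above follows from the standard SH-mode phase-velocity relation for a plate of thickness d and bulk shear speed c_s: c_p = c_s / sqrt(1 - (n·c_s / (2·f·d))²). A sketch with typical steel values (assumed, not the thesis's measured data):

```python
import math

def sh_phase_velocity(n, f_hz, d_m, c_s=3200.0):
    """Phase velocity of the n-th SH mode; None if the mode is below cut-off."""
    term = (n * c_s) / (2.0 * f_hz * d_m)
    if term >= 1.0:
        return None                       # mode does not propagate
    return c_s / math.sqrt(1.0 - term ** 2)

d = 0.002                                 # 2 mm plate, as in the experiments
v0_low = sh_phase_velocity(0, 200e3, d)   # SH0 at 200 kHz
v0_high = sh_phase_velocity(0, 2e6, d)    # SH0 at 2 MHz: identical speed
v1 = sh_phase_velocity(1, 2e6, d)         # SH1 is dispersive (frequency-dependent)
```

For n = 0 the cut-off term vanishes, so SH0 travels at the bulk shear speed at every frequency, which is why pulses of this mode keep their shape over long distances.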
|
10 |
Sistema de visão omnidirecional aplicado no controle de robôs móveis / Omnidirectional vision system applied to mobile robots control. Grassi Júnior, Valdir, 07 May 2002.
Omnidirectional vision systems can capture images with a 360-degree field of view. This type of system is well suited for tasks such as robotic navigation, tele-operation, and visual servoing. Such systems do not require moving the camera toward the robot's direction of attention. On the other hand, they require non-conventional image processing, as the acquired image is mapped on a non-linear polar coordinate system. One effective way to obtain an image in an omnidirectional system is through the combined use of lenses and mirrors. Several different shapes of convex mirrors can be used, mounting the camera with its optical axis aligned with the center of the mirror.
The most commonly used mirror shapes are conic, parabolic, hyperbolic, and spherical. In this work, a hyperbolic mirror was used to build an omnidirectional vision system. The system was mounted on a mobile robot and used in a control task. The task of interest here is tracking a moving target in real time while keeping the distance between the robot and the target constant. This is accomplished using data from the omnidirectional vision system as feedback to control the mobile robot in a visual servoing approach.
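The non-conventional processing mentioned above often begins by unwrapping the polar omnidirectional image into a panorama. The sketch below uses nearest-neighbour sampling with illustrative center and radius values; a calibrated system would derive these from the mirror geometry:

```python
import numpy as np

def unwrap_to_panorama(omni, center, r_min, r_max, out_w=360, out_h=60):
    """Sample the annulus between r_min and r_max along radial rays so each
    panorama column corresponds to one azimuth around the mirror centre."""
    pano = np.zeros((out_h, out_w), dtype=omni.dtype)
    cy, cx = center
    for col in range(out_w):
        theta = 2.0 * np.pi * col / out_w            # azimuth of this column
        for row in range(out_h):
            r = r_min + (r_max - r_min) * row / (out_h - 1)
            y = int(round(cy + r * np.sin(theta)))   # nearest-neighbour sample
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < omni.shape[0] and 0 <= x < omni.shape[1]:
                pano[row, col] = omni[y, x]
    return pano

omni = np.random.rand(200, 200)                      # stand-in catadioptric frame
pano = unwrap_to_panorama(omni, center=(100, 100), r_min=30, r_max=95)
```

Production code would precompute the remap tables once and use bilinear interpolation, but the coordinate transform is the same.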
|