51
Asservissement visuel coordonné de deux bras manipulateurs / Coordinated visual servoing of two manipulator arms. Fleurmond, Renliw, 17 December 2015.
We address the problem of coordinating a dual-arm robot using one or several cameras. After reviewing the control techniques dedicated to this problem, our first contribution is a formalism based on 2D image-based visual servoing that coordinates the motions of a multi-arm robotic system using images from one or more cameras, whether embedded or external. The formalism also exploits the natural redundancy of such systems to take additional constraints into account: we developed a control strategy that performs a coordinated manipulation task while avoiding joint limits and the loss of visual features. To go further and tolerate occlusions, we proposed approaches that reconstruct the structure of the manipulated objects, and hence the visual features that characterize them. Finally, we validated this work in simulation and experimentally by making the dual-arm PR2 robot recap a pen.
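As background for this servoing formalism, the sketch below shows the classical image-based visual servoing law v = -λ L⁺ e for point features with a single camera and known depths. It is illustrative only: the function names and toy values are assumptions, and the multi-arm coordination and constraint handling described in the thesis are not shown.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction matrix of a normalized image point (x, y) at depth Z
    # (classical formulation used in image-based visual servoing).
    return np.array([
        [-1.0 / Z,  0.0,      x / Z, x * y,       -(1.0 + x**2),  y],
        [ 0.0,     -1.0 / Z,  y / Z, 1.0 + y**2,  -x * y,        -x],
    ])

def ibvs_velocity(points, depths, desired, gain=0.5):
    # Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the current
    # features toward the desired ones: v = -gain * pinv(L) @ e.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Toy usage: four points observed at 1 m depth, slightly offset from the goal.
current = [(0.11, 0.10), (-0.09, 0.10), (-0.10, -0.11), (0.10, -0.10)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(current, [1.0] * 4, desired))
```

In a multi-camera, multi-arm setting the stacked interaction matrix is mapped through the robot's kinematics, which is where the redundancy exploited in the thesis comes into play.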
52
Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision. Rastgar, Houman, January 2013.
The recent advances in the field of computer vision have brought many laboratory algorithms into the realm of industry. However, one problem that still remains open in the field of 3D vision is the problem of noise. The challenging problem of 3D structure recovery from images is highly sensitive to input data contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers, has led to many robust methods that can handle moderate outlier levels and still provide accurate outputs. The problem remains open for higher noise levels, however, and so the goal of this thesis is to address robustness with respect to two central, closely related problems in 3D computer vision, presented together within a Structure from Motion (SfM) context. The first is the robust estimation of the fundamental matrix from images whose correspondences contain high outlier levels. Even though this area has been extensively studied, two algorithms are proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms draw on ideas from robust statistics to develop guided sampling techniques based on information inferred from residual analysis. The second problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable in the general case, and it is shown that existing methods are highly sensitive to noise; in spite of this, robustness in self-calibration has received little attention in the literature. Experimental results show that it is essential for a real-world self-calibration algorithm to be robust. To introduce robustness, three algorithms are proposed that use existing constraints for self-calibration from the fundamental matrix but are less affected by noise than existing algorithms based on those constraints. This is an important milestone, since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods and a robust root-finding method for systems of higher-order polynomials. By adding robustness to self-calibration, it is hoped that this idea is one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
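The guided-sampling estimators themselves are not detailed in the abstract. As a point of reference, the sketch below shows a standard RANSAC fundamental-matrix fit with OpenCV together with the Sampson residual commonly used to rank correspondences, which is the kind of residual information a guided sampler can exploit; the function names and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def robust_fundamental(pts1, pts2, thresh_px=1.0, confidence=0.999):
    # Baseline RANSAC estimate of F from correspondences that may contain
    # a large fraction of outliers (the thesis replaces uniform sampling
    # with residual-guided sampling, which is not reproduced here).
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                     thresh_px, confidence)
    inliers = mask.ravel().astype(bool) if mask is not None else None
    return F, inliers

def sampson_error(F, pts1, pts2):
    # First-order (Sampson) approximation of the geometric error for each
    # correspondence; small values indicate agreement with F.
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    Fx1 = x1 @ F.T    # rows are F @ x1_i
    Ftx2 = x2 @ F     # rows are F.T @ x2_i
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den
```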
53
VYUŽITÍ METOD OPTICKÉHO SKENOVÁNÍ V GEOMORFOLOGICKÝCH ANALÝZÁCH / THE USE OF IMAGE MATCHING METHODS IN GEOMORPHOLOGICAL ANALYSIS. Šiková, Zuzana, January 2015.
The use of optical scanning methods in geomorphological analysis. Abstract: The main goal of this thesis is to determine whether the Structure from Motion (SfM) method can be used to analyse geomorphological objects. Four geomorphological features at three different sites in the Pilsen-North region were used to test the method. The objects, which differ greatly in dimensions and shape, were scanned under various lighting conditions, and all datasets were created entirely by the author of this thesis. The data were first processed with Agisoft PhotoScan Professional Edition v1.1.6 and VisualSFM v0.5.26 to create spatial models. These models were then processed in CloudCompare v2.6.1 and MeshLab v1.3.3, which were used for clipping and merging the 3D models and for scaling them to real-world dimensions. The scaled spatial models were then compared with one another by creating comparison entities. The outcomes are evaluated in the thesis conclusion. Keywords: Structure from motion (SfM), SIFT, RANSAC
54
Resilient visual perception for multiagent systems. Karimian, Arman, 15 May 2021.
There has been an increasing interest in visual sensors and vision-based solutions for single and multi-robot systems. Vision-based sensors, e.g., traditional RGB cameras, grant rich semantic information and accurate directional measurements at a relatively low cost; however, such sensors have two major drawbacks. They do not generally provide reliable depth estimates, and typically have a limited field of view. These limitations considerably increase the complexity of controlling multiagent systems. This thesis studies some of the underlying problems in vision-based multiagent control and mapping.
The first contribution of this thesis is a method for restoring bearing rigidity in non-rigid networks of robots. We introduce means to determine which bearing measurements can improve bearing rigidity in non-rigid graphs and provide a greedy algorithm that restores rigidity in 2D with a minimum number of added edges.
The focus of the second part is on the formation control problem using only bearing measurements. We address consensus and formation control through non-smooth Lyapunov functions and differential inclusions. We provide a stability analysis for undirected graphs and investigate the derived controllers for directed graphs. We also introduce a new notion of bearing persistence for purely bearing-based control in directed graphs.
The third part is concerned with the bearing-only visual homing problem with a limited field of view sensor. In essence, this problem is a special case of the formation control problem where there is a single moving agent with fixed neighbors. We introduce a navigational vector field, composed of two orthogonal vector fields, that converges to the goal position without violating the field of view constraints. Our method does not require the landmarks' locations and is robust to loss of landmark tracking.
The last part of this dissertation considers outlier detection in pose graphs for Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) problems. We propose a method for detecting incorrect orientation measurements before pose graph optimization by checking their geometric consistency over cycles. We use Expectation-Maximization to fine-tune the noise distribution parameters and propose a new approximate graph inference procedure, specifically designed to take advantage of evidence on cycles, with better performance than standard approaches.
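A minimal sketch of the cycle-consistency test that such outlier detection builds on (not the EM-based inference itself), assuming relative rotations are available for each loop of the pose graph; the thresholds and toy rotations are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def cycle_rotation_error(relative_rotations):
    # Angle (radians) by which the composition of relative rotations around
    # a closed cycle deviates from the identity.
    composed = R.identity()
    for r in relative_rotations:
        composed = r * composed
    return composed.magnitude()

def flag_suspect_cycles(cycles, threshold_deg=5.0):
    # Cycles whose accumulated rotation error exceeds the threshold contain
    # at least one suspect measurement.
    thr = np.deg2rad(threshold_deg)
    return [cycle_rotation_error(rots) > thr for rots in cycles]

# Toy example: a clean 3-cycle closes, a corrupted closing edge does not.
r_ab = R.from_euler("z", 30, degrees=True)
r_bc = R.from_euler("z", 40, degrees=True)
r_ca = (r_bc * r_ab).inv()                       # consistent closing edge
bad_r_ca = R.from_euler("z", -90, degrees=True)  # corrupted measurement
print(flag_suspect_cycles([[r_ab, r_bc, r_ca], [r_ab, r_bc, bad_r_ca]]))
# -> [False, True]
```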
These works will help enable multi-robot systems to overcome visual sensors' limitations in collaborative tasks such as navigation and mapping.
55
Optimized 3D Reconstruction for Infrastructure Inspection with Automated Structure from Motion and Machine Learning Methods. Arce Munoz, Samuel, 09 June 2020.
Infrastructure monitoring is being transformed by advances in remote sensing, unmanned vehicles and information technology. The wide interaction among these fields and the availability of reliable commercial technology are helping pioneer intelligent inspection methods based on digital 3D models. Commercially available Unmanned Aerial Vehicles (UAVs) have been used to create 3D photogrammetric models of industrial equipment, but the level of automation of these missions remains low. Limited flight time, wireless transfer of large files and the lack of algorithms to guide a UAV through unknown environments are some of the factors that constrain fully automated UAV inspections. This work demonstrates the use of unsupervised machine learning methods to develop an algorithm capable of constructing a 3D model of an unknown environment in an autonomous, iterative way. The capabilities of this novel approach are tested in a field study, where a municipal water tank is mapped to a level of resolution comparable to that of manual missions flown by experienced engineers, but using 63%. The iterative approach also shows improvements in autonomy and model coverage when compared to reproducible automated flights. Additionally, the use of this algorithm on different terrains is explored through simulation software, demonstrating the effectiveness of the automated iterative approach in other applications.
56
Linking glacial erosion and rock type via spectral roughness and spatial patterns of fractures on glaciated bedrock in the Teton Range, Wyoming, USA. Dodson, Zoey, January 2018.
No description available.
57
Unsupervised Learning for Structure from Motion. Örjehag, Erik, January 2021.
Perception of depth, ego-motion and robust keypoints is critical for SLAM and structure from motion applications. Neural networks have achieved great performance in perception tasks in recent years, but collecting labeled data for supervised training is labor intensive and costly. This thesis explores recent methods for unsupervised training of neural networks that can predict depth, ego-motion and keypoints and perform geometric consensus maximization. The benefit of unsupervised training is that the networks can learn from raw data collected from the camera sensor, instead of labeled data. The thesis focuses on training on images from a monocular camera, where no stereo or LIDAR data is available. The experiments compare different techniques for depth and ego-motion prediction from previous research and show how the techniques can be combined successfully. A keypoint prediction network is evaluated and its performance is compared with the ORB detector provided by OpenCV. A geometric consensus network is also implemented and its performance is compared with the RANSAC algorithm in OpenCV. The consensus maximization network is trained on the output of the keypoint prediction network. For future work it is suggested that all networks could be combined and trained jointly to reach a better overall performance. The results show (1) which techniques in unsupervised depth prediction are most effective, (2) that the keypoint prediction network outperformed the ORB detector, and (3) that the consensus maximization network was able to classify outliers with performance comparable to the RANSAC algorithm of OpenCV.
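The core of unsupervised depth and ego-motion training in this line of work is a view-synthesis (photometric) loss. The sketch below is a minimal PyTorch version under the assumption that a network supplies depth and relative pose and that camera intrinsics are known; it omits the keypoint and consensus networks evaluated in the thesis, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    # L1 loss between the target image and the source image warped into
    # the target view using predicted depth and relative camera pose.
    #   target, source: (B, 3, H, W) images
    #   depth:          (B, 1, H, W) predicted depth of the target view
    #   pose:           (B, 4, 4) transform from target to source frame
    #   K:              (B, 3, 3) camera intrinsics
    B, _, H, W = target.shape
    device = target.device

    # Pixel grid in homogeneous coordinates, shape (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D in the target frame, then move into the source frame.
    cam_points = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_points = torch.cat(
        [cam_points, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_points = (pose @ cam_points)[:, :3, :]

    # Project into the source image and normalize to [-1, 1] for grid_sample.
    proj = K @ src_points
    uv = proj[:, :2, :] / proj[:, 2:3, :].clamp(min=1e-6)
    u = 2.0 * uv[:, 0, :] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1, :] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)

    warped = F.grid_sample(source, grid, padding_mode="border",
                           align_corners=True)
    return (warped - target).abs().mean()
```

Minimizing this loss jointly over the depth and pose networks is what lets them learn from raw monocular video without labels.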
58
Mapping with Modern Prosumer Small Unmanned Aircraft Systems: Addressing the Geospatial Accuracy Debate. Dixon, Madison Palacios, 10 August 2018.
Modern prosumer small unmanned aircraft systems (sUAS) have eliminated many historical barriers to aerial remote sensing and photogrammetric survey data generation. The relatively low cost and operational ease of these platforms have driven their adoption for numerous geospatial applications, including professional surveying and mapping. However, significant debate exists among geospatial professionals and academics regarding the ability of prosumer sUAS to achieve "survey-grade" geospatial accuracy (≤ 0.164 ft) in their derivative survey data. To address this debate, a controlled accuracy test was conducted in accordance with federal standards; prosumer sUAS geospatial accuracies ranged from 15.367 ft to 0.09 ft horizontally and from 496.734 ft to 0.330 ft vertically at the 95% confidence level. These results suggest that prosumer sUAS-derived survey data fell short of "survey-grade" accuracy in this experiment. Therefore, traditional surveying instruments and methods should not yet be relinquished in favor of prosumer sUAS for complex applications requiring "survey-grade" accuracy.
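For reference, federal accuracy standards of this kind typically report accuracy at the 95% confidence level from RMSE statistics over independent check points. The sketch below illustrates that NSSDA-style calculation on synthetic data; the factors 1.7308 and 1.9600 assume roughly equal x/y error and normally distributed vertical error, and the check-point values are illustrative only.

```python
import numpy as np

def nssda_accuracy(survey_xy, truth_xy, survey_z, truth_z):
    # NSSDA-style accuracy at the 95% confidence level (same length unit
    # as the inputs), computed from independent check points.
    dx = survey_xy[:, 0] - truth_xy[:, 0]
    dy = survey_xy[:, 1] - truth_xy[:, 1]
    dz = survey_z - truth_z
    rmse_r = np.sqrt(np.mean(dx**2 + dy**2))   # horizontal radial RMSE
    rmse_z = np.sqrt(np.mean(dz**2))
    return {
        "horizontal_95": 1.7308 * rmse_r,      # assumes RMSE_x ~= RMSE_y
        "vertical_95": 1.9600 * rmse_z,
    }

# Illustrative check points (feet); real tests use >= 20 well-distributed points.
rng = np.random.default_rng(0)
truth_xy = rng.uniform(0, 100, size=(20, 2))
truth_z = rng.uniform(0, 10, size=20)
result = nssda_accuracy(truth_xy + rng.normal(0, 0.05, (20, 2)), truth_xy,
                        truth_z + rng.normal(0, 0.15, 20), truth_z)
print(result)
```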
59
Intertidal resource cultivation over millennia structures coastal biodiversity. Cox, Kieran D., 22 December 2021.
Cultivation of marine ecosystems began in the early Holocene and has contributed vital resources to humans over millennia. Several more recent cultivation practices, however, erode biodiversity. Emerging lines of evidence indicate that certain resource management practices may promote favourable ecological conditions. Here, I use the co-occurrence of 24 First Nations clam gardens, shellfish aquaculture farms, and unmodified clam beaches to test several hypotheses concerning the ecological implications of managing intertidal bivalve populations. To do so, in 2015 and 2016, I surveyed epifaunal (surface) and bivalve communities and quantified each intertidal site's abiotic conditions, including sediment characteristics and substrate composition. In 2017, I generated three-dimensional models of each site using structure-from-motion photogrammetry and measured several aspects of habitat complexity. Statistical analyses use a combination of non-parametric multivariate statistics, multivariate regression trees, and random forests to quantify the extent to which intertidal resource cultivation structures nearshore biodiversity.
Chapter 1 outlines a brief history of humanity's use of marine resources, the transition from extracting to cultivating aquatic taxa, and the emergence of the northeast Pacific's most prevalent shellfish cultivation practices: clam gardens and shellfish farms.
Chapter 2 evaluates the ability of epifaunal community assessment methods to capture species diversity by conducting a paired field experiment using four assessment methods: photo-quadrat, point-intercept, random subsampling, and full-quadrat assessments. Conducting each method concurrently within multiple intertidal sites allowed me to quantify the implications of varying sampling areas, subsampling, and photo surveys on detecting species diversity, abundance, and sample- and coverage-based biodiversity metrics. Species richness, density, and sample-based rarefaction varied between methods, despite assessments occurring at the same locations, with photo-quadrats detecting the lowest estimates and full-quadrat assessments the highest. Abundance estimates were consistent among methods, supporting the use of extrapolation. Coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. The top-performing method, random subsampling, was used to conduct Chapter 4’s surveys.
Chapter 3 examines the connection between shellfish biomass and the ecological conditions clam gardens and shellfish farms foster. First, I established the methodological implications of varying sediment volume on the detection of bivalve diversity, abundance, shell length, and sample- and coverage-based biodiversity metrics. As in Chapter 2, this examination identified the most suitable method, which I used during the 2015 and 2016 bivalve surveys. The analyses quantified several interactions between each site's abiotic conditions and biological communities, including the influence of substrate composition, sediment characteristics, and physical complexity on bivalve communities, and whether bivalve richness and habitat complexity facilitate increases in bivalve biomass.
Chapter 4 quantifies the extent to which managing intertidal bivalves enhances habitat complexity, fostering increased diversity in the epifaunal communities. This chapter combines the 2015, 2016, and 2017 surveys of the sites' epifaunal communities with habitat complexity metrics, including fractal dimension at four resolutions and linear rugosity. Clam gardens enhance fine- and broad-scale complexity, while shellfish farms primarily increase fine-scale complexity, allowing for insights into parallel and divergent community responses.
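As an aside on the complexity metrics mentioned above, linear rugosity is simply the contour length of a transect over the reconstructed surface divided by the straight-line distance between its endpoints. The sketch below is a minimal illustration with a synthetic profile; it is not the thesis's analysis pipeline, and the fractal-dimension calculation is not shown.

```python
import numpy as np

def linear_rugosity(profile):
    # Rugosity of a transect: contour (chain) length over the straight-line
    # distance between its endpoints. Values near 1 indicate a flat surface.
    #   profile: (N, 3) points sampled along a transect of the 3D model.
    profile = np.asarray(profile, dtype=float)
    contour = np.sum(np.linalg.norm(np.diff(profile, axis=0), axis=1))
    chord = np.linalg.norm(profile[-1] - profile[0])
    return contour / chord

# Illustrative transect: a gently undulating 2 m profile.
x = np.linspace(0, 2, 200)
profile = np.column_stack([x, np.zeros_like(x), 0.05 * np.sin(8 * np.pi * x)])
print(round(linear_rugosity(profile), 3))
```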
Chapter 5 presents an overview of shellfish as a marine subsidy to coastal terrestrial ecosystems along the Pacific coast of North America. I identified the vectors that transport shellfish-derived nutrients into coastal terrestrial environments, including birds, mammals, and over 13,000 years of marine resource use by local people. I also examined the abundance of shellfish-derived nutrients transported, the prolonged persistence of shellfish subsidies once deposited within terrestrial ecosystems, and the ecological implications for recipient ecosystems.
Chapter 6 contextualizes the preceding chapters relative to the broader literature. The objective is to provide insight into how multiple shellfish cultivation systems influence biological communities and how ecological mechanisms facilitate biotic responses, and to summarize the implications for conservation planning, Indigenous resource sovereignty, and biodiversity preservation. It also explores future work, specifically the need to support efforts that pair Indigenous knowledge and ways of knowing with Western scientific insights to address conservation challenges.
60
Benchmarking structure from motion algorithms with video footage taken from a drone against laser-scanner generated 3D models. Martell, Angel Alfredo, January 2017.
Structure from motion is a novel approach to generating 3D models of objects and structures from a dataset that simply consists of a series of images taken from different positions. The ease of data acquisition and the wide array of available algorithms make the technique easily accessible. The structure from motion method identifies features in all the images of the dataset, such as edges with gradients in multiple directions, tries to match these features between all the images, and then computes the relative motion of the camera between any pair of images. It builds a 3D model from the correlated features, producing a 3D point cloud with colour information of the scanned object. Different implementations of structure from motion use different approaches to solve the feature-correlation problem between the images of the dataset, different methods for detecting the features, and different alternatives for sparse and dense reconstruction. These differences lead to variations in the final output across algorithms. This thesis benchmarked these algorithms in accuracy and processing time. For this purpose, a terrestrial 3D laser scanner was used to scan structures and buildings to generate a ground-truth reference against which the structure from motion algorithms were compared. A video feed was then captured from a drone with a built-in camera flying around each structure or building to generate the input for the structure from motion algorithms. Different structures are considered, taking into account how rich or poor in features they are, since this affects the result of the structure from motion algorithms. The structure from motion algorithms generated 3D point clouds, which were then analysed with a tool like CloudCompare to benchmark how similar they are to the laser-scanner data, and the runtime was recorded for comparison across all algorithms. A subjective analysis was also made, covering how easy each algorithm is to use and how complete the produced model looks in comparison to the others. The comparison found no absolute best algorithm, since every algorithm excels in different aspects. Some algorithms generate a model very fast, scaling their execution time linearly with the size of the input, but at the expense of accuracy. Others take a long time for dense reconstruction but generate almost complete models even in the presence of featureless surfaces, such as COLMAP's modified PatchMatch algorithm. The structure from motion methods are able to generate models with an accuracy of up to 3 cm when scanning a simple building, with Visual Structure from Motion and Open Multi-View Environment ranking among the most accurate. It is worth highlighting that the error grows as the complexity of the scene increases. Finally, it was found that the structure from motion method cannot correctly reconstruct structures with reflective surfaces, or with repetitive patterns when the images are taken from mid to close range, where the produced errors can be as high as 1 m on a large structure.
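A hedged sketch of the kind of cloud-to-cloud comparison performed here (CloudCompare-style, but written with the Open3D library): the SfM cloud is refined against the laser reference with ICP, then nearest-neighbour distances are summarized. The file names are placeholders, and the clouds are assumed to already share a metric scale and rough alignment.

```python
import numpy as np
import open3d as o3d

def compare_to_reference(sfm_path, laser_path, voxel=0.02, icp_dist=0.1):
    # Align an SfM point cloud to a laser-scanner reference and report
    # nearest-neighbour (cloud-to-cloud) distance statistics in metres.
    sfm = o3d.io.read_point_cloud(sfm_path).voxel_down_sample(voxel)
    ref = o3d.io.read_point_cloud(laser_path).voxel_down_sample(voxel)

    # Refine the (assumed roughly pre-aligned, metrically scaled) SfM cloud.
    icp = o3d.pipelines.registration.registration_icp(
        sfm, ref, icp_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    sfm.transform(icp.transformation)

    dists = np.asarray(sfm.compute_point_cloud_distance(ref))
    return {"mean": dists.mean(),
            "rmse": np.sqrt((dists**2).mean()),
            "p95": np.percentile(dists, 95)}

# Placeholder file names; real clouds must share a metric scale before ICP.
# stats = compare_to_reference("sfm_model.ply", "laser_scan.ply")
```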