171 |
Měření deformací komponent motocyklů / Deformation measurement of motorcycle components
Augste, Jan January 2011 (has links)
The task of this diploma thesis is a technical solution for the deformation measurement of motorcycle components. For technical reasons, the measurement technology was changed to one based on image processing. A literature review describes the fundamental principle of the method known as photogrammetry and the main measurement conditions. For practical testing, the carbon composite tube of the main motorcycle fork was used, loaded with a bending moment. Because the measurement technology was changed, the design of the loading bench was changed as well. The thesis concludes with an assessment of the practical experience gained during its preparation.
|
172 |
The forensic utility of photogrammetry in surface scene documentation
Church, Elizabeth 09 October 2019 (has links)
In current forensic practice, there are few standards for outdoor crime scene documentation, despite the need for such documentation to be accurate and precise in order to preserve evidence. A potential solution to this is the implementation of image-based photogrammetry. Applied Structure from Motion (SfM) reconstructs models through image point comparisons. A 3D model is produced from a reference photoset that captures a 360-degree view of the subject and the software employs triangulation to match specific points, datums, across individual photos. The datums are arranged into a point-cloud that is then transformed into the final model. Modifying the point-cloud into a final product requires algorithms that adjust the points by building a textured mesh from them. One of the disadvantages of SfM is that the point-cloud can be “noisy,” meaning that the program is unable to distinguish the features of one datum from another due to similarities, creating coverage gaps within the meshed images. To compensate for this, the software can smooth portions of the model in a best-guess process during meshing. As commercial software does not disclose the adjustment algorithms, this documentation technique, while very useful in other disciplines that regularly apply SfM such as archaeology, would fail to meet the standards of the Daubert and Kumho criteria in a forensic setting.
A potential solution to this problem is to use open-source software, which discloses the adjustment algorithms to the user. It was hypothesized that the output of open-source software solutions would be as accurate as the models produced with commercial software and with total station mapping techniques. To evaluate this hypothesis, a series of mock outdoor crime scenes were documented using SfM and traditional mapping techniques. The scenes included large surface scatter and small surface scatter scenes. The large surface scatter scenes contained a dispersed set of plastic human remains and various objects that might reasonably be associated with a crime scene. Ten of these scenes were laid out in 10 x 10 m units in a New England forested environment, each grid with a slightly different composition, and then documented using an electronic total station, data logger and digital camera. The small surface scatter scenes consisted of a pig mandible placed in different environments across two days of data collection. The resulting models were built using PhotoScan by AgiSoft, the commercial software, and MicMac for Mac OSX as the open-source comparison software. Accuracy is only part of the concern, however; the full utility of any one of the workflows is defined additionally by the overall cost-effectiveness (affordability and accessibility) and the visual quality of the final model. Accuracy was measured by the amount of variance in fixed-datum measurements that remained consistent across scenes, whereas visual quality of the photogrammetric models was determined by cloud comparison histograms, which allow for comparison of models between software types and across different days of data collection. Histograms were generated using CloudCompare. Not all models that were rendered were usable: 90% of large surface scatter models and 87.5% of small surface scatter models could be used.
While there was variance in the metric outputs between the total station and photogrammetric models, the average total variance in fixed-datum lengths for individual scenes was below 0.635 cm for six of the ten scenes. However, only one of the large surface scatter scenes produced measurements that differed significantly between the total station and the software. The maximum differences between the total station and software measurements were 0.0917 m (PhotoScan) and 0.178 m (MicMac). The minimum difference found for either software was 0.000 m, indicating an exact match. The histograms for the large scatter scenes were comparable, with the commercial and open-source software-derived models having low standard deviations and mean distances between points. For the small surface scatter scenes, the histograms between software types varied depending on the environment and the lighting conditions on the day of data collection. Conditions such as light, ground foliage and topography affect model quality significantly, as does the amount of available computing power. No such issues of losing objects or limitations of computing power were encountered when mapping by total station and processing the data in AutoCAD. This research shows that SfM has the potential to be a rapid, accurate and low-cost resource for forensic investigation. SfM methodology for outdoor crime scene documentation can be adapted to fit within evidentiary criteria through the use of open-source software and transparent processing, but there are limitations that must be taken into consideration.
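The fixed-datum accuracy check described above can be sketched as follows. This is a minimal illustration, not the thesis's actual workflow: the datum labels and length values are hypothetical, and a real comparison would use the surveyed lengths exported from the total station and the same lengths measured on each rendered model.

```python
# Hedged sketch: comparing fixed-datum lengths from a total station
# against the same lengths measured on an SfM-derived model.
# All datum names and values below are hypothetical illustrations.

def measurement_differences(total_station, model):
    """Absolute difference per fixed-datum length, in metres."""
    return {k: abs(total_station[k] - model[k]) for k in total_station}

total_station = {"datum_A-B": 4.512, "datum_B-C": 2.304, "datum_A-C": 5.881}
sfm_model     = {"datum_A-B": 4.509, "datum_B-C": 2.310, "datum_A-C": 5.874}

diffs = measurement_differences(total_station, sfm_model)
max_diff = max(diffs.values())
mean_diff = sum(diffs.values()) / len(diffs)
```

Summarizing the per-datum differences per scene (maximum and mean, as above) is one simple way to express the kind of scene-level variance the study reports.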
|
173 |
DEEP LEARNING-BASED PANICLE DETECTION BY USING HYPERSPECTRAL IMAGERY
Ruya Xu (9183242) 30 July 2020 (has links)
<div>Sorghum, which is grown internationally as a cereal crop that is robust to heat, drought, and disease, has numerous applications for food, forage, and biofuels. When monitoring the growth stages of sorghum, or phenotyping specific traits for plant breeding, it is important to identify and monitor the panicles in the field because of their impact on grain production. Several studies have focused on detecting panicles based on data acquired by RGB and multispectral remote sensing technologies. However, few experiments have included hyperspectral data because of its high dimensionality and computational requirements, even though the data provide abundant spectral information. Among analysis approaches, machine learning models, and specifically deep learning models, have the potential to accommodate the complexity of these data. In order to detect panicles in the field with different physical characteristics, such as colors and shapes, very high spectral and spatial resolution hyperspectral data were collected with a wheel-based platform, processed, and analyzed with multiple extensions of the VGG-16 Fully Convolutional Network (FCN) semantic segmentation model.</div><div><br></div><div>Orthorectification experiments were also conducted in the study to obtain the proper positioning of the image data acquired by the pushbroom hyperspectral camera at near range. The scale of the LiDAR-derived DSM used for orthorectification of the hyperspectral data was determined to be a critical issue, and the application of the Savitzky-Golay filter to the original DSM data was shown to contribute to the improved quality of the orthorectified imagery.</div><div><br></div><div>Three tuned versions of the VGG-16 FCN deep learning architecture were modified to accommodate the hyperspectral data: PCA&FCN, 2D-FCN, and 3D-FCN.
It was concluded that all three models can detect the late season panicles included in this study, but the end-to-end models performed better in terms of precision, recall, and F-score metrics. Future work should focus on improving annotation strategies and the model architecture to detect different panicle varieties and to separate overlapping panicles, based on adequate quantities of training data acquired during the flowering stage.</div>
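The Savitzky-Golay smoothing applied to the DSM above can be illustrated with a toy one-dimensional profile. This sketch is not the thesis's implementation: it hardcodes the classic 5-point quadratic Savitzky-Golay kernel and applies it to a hypothetical elevation row, whereas the study filtered a full LiDAR-derived DSM with parameters not stated in the abstract.

```python
# Hedged sketch of Savitzky-Golay smoothing on one DSM profile (a row of
# elevation values). Uses the well-known 5-point, order-2 SG coefficients
# (-3, 12, 17, 12, -3)/35; the elevation values are hypothetical.

SG5 = [-3.0, 12.0, 17.0, 12.0, -3.0]

def savgol5(profile):
    """Smooth interior samples with the 5-point quadratic SG kernel;
    the two samples at each end are left unchanged for simplicity."""
    out = list(profile)
    for i in range(2, len(profile) - 2):
        window = profile[i - 2:i + 3]
        out[i] = sum(c * v for c, v in zip(SG5, window)) / 35.0
    return out

# An order-2 SG filter reproduces quadratic terrain exactly, which is why
# it can suppress DSM noise without flattening smooth relief.
dsm_row = [100.0 + 0.01 * x * x for x in range(10)]
smoothed = savgol5(dsm_row)
```

In practice one would reach for a library routine (e.g. `scipy.signal.savgol_filter`) and apply it along both DSM axes; the sketch only shows the local polynomial-fit idea behind the filter.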
|
174 |
3D City Models - A Comparative Study of Methods and Datasets
Uggla, Gustaf January 2015 (has links)
There are today many available datasets and methods that can be used to create 3D city models, which in turn can be used for numerous applications within the fields of visualization, communication and analysis. The purpose of this thesis is to perform a practical comparison between three methods for 3D city modeling using different combinations of datasets: one using LiDAR data combined with oriented aerial images, one using only oriented aerial images, and one using non-oriented aerial images. In all three cases, geometry and textures are derived from the data and the models are imported into the game engine Unity. The three methods are evaluated in terms of the resulting model, the amount of manual work required, the time consumed, and the cost of data and software licenses. An application example visualizing flooding scenarios in central Stockholm is featured in the thesis to give a simple demonstration of what can be done with 3D city models in a game engine environment. The result of the study shows that combining LiDAR data with oriented images and using a more manual process to create the model gives the highest potential quality, both in terms of visual appearance and semantic depth. Using only oriented images and commercial software is the easiest and most reliable way to create a usable 3D city model. Non-oriented images and open-source software can be used for 3D reconstruction but are not suited for larger areas or geographic applications. Finding reliable automatic or semi-automatic methods to create semantically rich 3D city models from remotely sensed data would be hugely beneficial, as more sophisticated applications could be programmed with the 3D city model as a base.
|
175 |
Photogrammetric software as an alternative to 3D laser scanning in an amateur environment
Warne, Markus January 2015 (has links)
Photogrammetric software today is at a level where it is accessible to the mainstream public and able, without great effort, to reconstruct digital 3D models from photographic input. This thesis investigates the performance of photogrammetrically reconstructed models and evaluates them by comparing the results to the corresponding models from a 3D laser scanner, with a focus on smaller objects in an amateur environment. The evaluation is performed on four different objects, each individually compared to its scanned counterpart. They are compared both by a subjective judgment of quality and by numerically measuring the point-to-point distance on the models. From the results, the conclusion is drawn that the two methods can produce similar results, although many factors were found to affect the quality of a photogrammetric reconstruction. The properties of the physical object and the quality of the visual input data stand out as the most important factors.
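The point-to-point comparison mentioned above can be sketched as a nearest-neighbour distance between two point clouds. This is a hedged illustration, not the thesis's evaluation code: the clouds are tiny hypothetical examples, and a real comparison would first align the models and use a spatially indexed search rather than brute force.

```python
# Hedged sketch of cloud-to-cloud comparison: for every point in the
# photogrammetric model, the distance to its nearest neighbour in the
# laser-scanned reference. Brute force, pure Python; the coordinates
# below are hypothetical.
import math

def nearest_distances(cloud_a, cloud_b):
    """For each point in cloud_a, the Euclidean distance to the
    closest point in cloud_b."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

photo_model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.1)]
laser_scan  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

dists = nearest_distances(photo_model, laser_scan)
mean_error = sum(dists) / len(dists)
```

Aggregates of these distances (mean, maximum, or a histogram) are the usual way such model-to-model deviations are summarized.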
|
176 |
New Insights Into Prehispanic Urban Organization at Tiwanaku (NE Bolivia): Cross Combined Approach of Photogrammetry, Magnetic Surveys and Previous Archaeological Excavations
Vella, M. A., Ernenwein, E. G., Janusek, J. W., Koons, M., Thiesson, J., Sanchez, C., Guérin, R., Camerlynck, C. 01 February 2019 (has links)
The prehispanic site of Tiwanaku, located in northeastern Bolivia, was the focus of many studies during the past few decades. However, much of the site remains unexplored, leaving many questions unanswered about the location of dense archaeological deposits, the nature of urban organization, and water management strategies, specifically those in the eastern sector of the Akapana Pyramid. Orthophoto mosaics and Digital Elevation Models derived from drone imagery helped identify archaeological features and anthropogenic mounds. A new magnetic survey carried out with a cesium gradiometer was merged with previous surveys (fluxgate and cesium gradiometer). The integration of maps and plans from six areas of previous archaeological investigation within a common Geographical Information System helped relate geophysical anomalies to archaeological features. Our results demonstrate a high level of urban organization associating monumental buildings with open ritual spaces and densely populated areas during Tiwanaku IV (500–800 CE) and V (800–1100 CE). The complexity of the urban organization is also demonstrated by landscape modifications such as a complex water management system and at least three terraces that augmented the monumentality of the Akapana Pyramid. This interdisciplinary approach, innovative in Bolivia, provides new insight into one of the most significant archaeological sites in the Andes.
|
177 |
COMBINING TRADITIONAL AND IMAGE ANALYSIS TECHNIQUES FOR UNCONSOLIDATED EXPOSED TERRIGENOUS BEACH SAND CHARACTERIZATION
Unknown Date (has links)
Traditional sand analysis is labor- and cost-intensive, entailing specialized equipment and operators trained in geological analysis. Even a small step toward automating part of the traditional geological methods could substantially improve the speed of such research while removing chances of human error. Digital image analysis techniques and computer vision have been well developed and applied in various fields but rarely explored for sand analysis. This research explores the capabilities of remote sensing digital image analysis techniques, such as object-based image analysis (OBIA), machine learning, digital image analysis, and photogrammetry, to automate or semi-automate the traditional sand analysis procedure. Presented here is a framework combining OBIA and machine learning classification of microscope imagery for use with unconsolidated terrigenous beach sand samples. Five machine learning classifiers (RF, DT, SVM, k-NN, and ANN) are used to model mineral composition from images of ten terrigenous beach sand samples. Digital image analysis and photogrammetric techniques are applied and evaluated for characterizing sand grain size and grain circularity (used as a digital proxy for traditional grain sphericity). A new segmentation process is also introduced, in which pixel-level SLICO superpixel segmentation is followed by spectral difference segmentation and further levels of superpixel segmentation at the object level. Previous methods of multi-resolution and superpixel segmentation at the object level do not provide the level of detail necessary to yield optimal sand grain-sized segments. In this proposed framework, the DT and RF classifiers provide the best estimations of mineral content of all classifiers tested, compared to traditional compositional analysis. Average grain size approximated from photogrammetric procedures is comparable to that from traditional sieving methods, with an RMSE below 0.05%.
The framework proposed here reduces the number of trained personnel needed to perform sand-related research. It requires minimal sand sample preparation and minimizes the user error typically introduced during traditional sand analysis. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
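The circularity proxy used above has a standard closed form: circularity = 4πA/P², where A is the segmented grain's area and P its perimeter. A minimal sketch of that measure follows; the abstract does not give the exact formula used in the dissertation, so this common definition is an assumption, and the shapes below are idealized stand-ins for segmented grains.

```python
# Hedged sketch of grain circularity as a digital proxy for sphericity:
# circularity = 4 * pi * area / perimeter**2, which equals 1.0 for a
# perfect circle and is lower for angular grains. The circle and square
# here are idealized examples, not measured grain outlines.
import math

def circularity(area, perimeter):
    """Dimensionless shape measure in (0, 1]; 1.0 means a perfect circle."""
    return 4.0 * math.pi * area / perimeter ** 2

r = 1.0
circle_c = circularity(math.pi * r ** 2, 2.0 * math.pi * r)  # ideal circle
square_c = circularity(1.0, 4.0)                              # unit square
```

In an image pipeline, the area (pixel count) and perimeter of each segmented grain object would be fed into this function, giving a score that ranks grains from rounded toward angular.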
|
178 |
Comparative Headstone Analysis and Photogrammetry of Cemeteries in Orange County, Florida.
Robinson, Tyra 01 January 2018 (has links)
Headstones manifest an abundance of historic information and embody society's cultural and socioeconomic statuses over time. Cemetery research has been conducted throughout various regions in the United States, but very little has focused on headstone analysis in the state of Florida. The purpose of this comparative research is to use a typology established by Meyers and Schultz (2012) to compare headstone attributes in Orange County, FL and establish a temporal correlation. This analysis can highlight societal perceptions and ideals surrounding death and mortuary practices while providing a historical context specific to the state of Florida. Data was collected from two cemeteries in Orange County, representing the headstones of 853 individuals. The methodology of this study entailed visiting the cemeteries, photographing headstones, and noting headstone attributes. Following the model set forth in Meyers and Schultz (2012), the attributes considered for this project were stone type, shape, time period, and sex of the individual. In addition to assessing headstone typology for historic cemeteries, the development of best practices for photogrammetry of headstones will be examined. The questions addressed in this research will hopefully illuminate mortuary trends in Central Florida and encourage future research and literature to shift its focus to include southern regions of the United States in terms of historical cemetery context. Additionally, practices developed in photogrammetry can aid public archaeology conservation and restoration efforts at historic cemeteries that are in danger of being lost to external circumstances.
|
179 |
Georeferencing Unmanned Aerial Systems Imagery via Registration with Geobrowser Reference Imagery
Nevins, Robert Pardy January 2017 (has links)
No description available.
|
180 |
Integration of Orbital and Ground Imagery for Automation of Rover Localization
Hwangbo, Ju Won 15 September 2010 (has links)
No description available.
|