331

PSF Sampling in Fluorescence Image Deconvolution

Inman, Eric A 01 March 2023 (has links) (PDF)
All microscope imaging is fundamentally limited in resolution by out-of-focus light and diffraction effects. The traditional approach to restoring image resolution is to use a deconvolution algorithm to "invert" the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and the choice of stopping criterion. In this paper, we reconstruct an explicit volumetric representation of the fluorescence density in the sample and fit a neural network to the target z-stack to properly minimize a reconstruction cost function for an optimal result. Additionally, we perform a weighted sampling of the point spread function to avoid unnecessary computations and prioritize non-zero signals. In a baseline comparison against the Richardson-Lucy method, our algorithm outperforms RL for images affected by high levels of noise.
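For context on the baseline, a minimal NumPy/SciPy sketch of Richardson-Lucy deconvolution follows; it is a generic 2-D illustration (the paper operates on z-stacks) with an assumed flat initialization and fixed iteration count, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Classic RL deconvolution for a 2-D image; extendable to z-stacks."""
    estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]            # adjoint of convolution with psf
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # per-pixel correction factor
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

The multiplicative update is what amplifies noise at high iteration counts, which is the shortcoming the paper's weighted-sampling approach targets.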
332

Efficient CNN-based Object ID Association Model for Multiple Object Tracking

Danesh, Parisasadat January 2023 (has links)
No description available.
333

Deep Time Series Modeling: From Distribution Regularity to Distribution Shift

Fan, Wei 01 January 2023 (has links) (PDF)
Time series data, as a pervasive data format, play a key role in numerous real-world scenarios. Effective time series modeling can help with accurate forecasting, resource optimization, risk management, etc. Given its great importance, how can we model the nature of pervasive time series data? Existing works have adopted statistical analysis, state space models, Bayesian models, or other machine learning models for time series modeling. However, these methods usually rest on particular assumptions and do not reveal the core, underlying rules of time series. Moreover, the recent advancement of deep learning has made neural networks a powerful tool for pattern recognition. This dissertation will target the problem of time series modeling using deep learning techniques to achieve accurate forecasting of time series. I will propose a principled approach to deep time series modeling from a novel distribution perspective. After in-depth exploration, I categorize and study two essential characteristics of time series: distribution regularity and distribution shift. I will investigate how time series data exhibiting these two characteristics can be analyzed through distribution extraction, distribution scaling, and distribution transformation. By applying recent deep learning techniques to distribution learning for time series, this dissertation aims to achieve more effective and efficient forecasting and decision-making. I will carefully illustrate the proposed methods across three themes and summarize the key findings and improvements achieved through experiments. Finally, I will present my future research plan and discuss how to broaden my research on deep time series modeling into a more general Data-Centric AI system for more generalized, reliable, fair, effective, and efficient decision-making.
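The abstract does not name its distribution-learning machinery, so as a hedged illustration of the general idea, the sketch below shows instance-level window normalization (in the spirit of RevIN), one widely used way to factor out per-window distribution shift before forecasting; all names and shapes are assumptions, not the dissertation's methods.

```python
import numpy as np

def normalize_window(x, eps=1e-8):
    """x: (length, channels) lookback window; remove its own statistics."""
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True) + eps
    return (x - mu) / sigma, (mu, sigma)

def denormalize_forecast(y_hat, stats):
    """Re-inject the window's statistics into the model's forecast."""
    mu, sigma = stats
    return y_hat * sigma + mu

# A forecaster trained on normalized windows sees a stationarized
# distribution; the shift is restored only at output time.
window = np.random.randn(96, 3) * 5.0 + 20.0
x_norm, stats = normalize_window(window)
forecast = denormalize_forecast(x_norm[-24:], stats)  # stand-in model output
```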
334

Quantifying the Effects of Permafrost Degradation in Arctic Coastal Environments via Satellite Earth Observation

Philipp, Marius Balthasar January 2023 (has links) (PDF)
Permafrost degradation is observed all over the world as a consequence of climate change and the associated Arctic amplification, with severe implications for the environment. Landslides, increased rates of surface deformation, a rising likelihood of infrastructure damage, amplified coastal erosion rates, and the potential turnover of permafrost from a carbon sink to a carbon source are exemplary implications linked to the thawing of frozen ground. In this context, satellite earth observation is a potent tool for identifying and continuously monitoring relevant processes and features on a low-cost, long-term, spatially explicit, and operational basis, up to a circumpolar scale. In an extensive review of past achievements, current trends, and future potentials and challenges of satellite earth observation for permafrost-related analyses, a total of 325 articles published in 30 different international journals during the past two decades were investigated with respect to the environmental foci studied, remote sensing platforms, sensor combinations, applied spatio-temporal resolutions, and study locations. The development over time of the analysed environmental subjects, the utilized sensors and platforms, and the number of annually published articles is addressed in detail. Studies linked to atmospheric features and processes, such as the release of greenhouse gas emissions, appear to be strongly under-represented. Investigations of the spatial distribution of study locations revealed distinct study clusters across the Arctic, while large sections of the continuous permafrost domain are only poorly covered and remain to be investigated in detail. A general trend towards increasing attention to satellite earth observation of permafrost and related processes and features was observed; the overall number of published articles has more than doubled since 2015. New sources of satellite data, such as the Sentinel satellites and the Methane Remote Sensing LiDAR Mission (Merlin), as well as novel methodological approaches, such as data fusion and deep learning, will likely improve our understanding of the thermal state and distribution of permafrost and the effects of its degradation. Furthermore, cloud-based big data processing platforms (e.g. Google Earth Engine (GEE)) will enable increasingly sophisticated and long-term analyses at larger scales and high spatial resolutions. In this thesis, a specific focus is placed on Arctic permafrost coasts, which are increasingly vulnerable to environmental change, such as the thawing of frozen ground, and are therefore subject to amplified erosion. In particular, a novel monitoring framework for quantifying Arctic coastal erosion rates within the permafrost domain at high spatial resolution and on a circum-Arctic scale is presented. Challenging illumination conditions and frequent cloud cover restrict the applicability of optical satellite imagery in Arctic regions. To overcome these limitations, Synthetic Aperture RADAR (SAR) data from Sentinel-1 (S1), which is largely independent of sun illumination and weather conditions, was utilized.
Annual SAR composites covering the months June–September were combined with a Deep Learning (DL) framework and a Change Vector Analysis (CVA) approach to generate both a high-quality, circum-Arctic coastline product and a coastal change product that highlights areas of erosion and build-up. Annual composites in the form of standard deviation (sd) and median backscatter were computed and used as inputs for both the DL framework and the CVA coastal change quantification. The final DL-based coastline product covered a total of 161,600 km of Arctic coastline and featured a median accuracy of ±6.3 m relative to the manually digitized reference data. Annual coastal change quantification between 2017 and 2021 indicated erosion rates of up to 67 m per year for some areas, based on 400 m coastal segments. In total, 12.24% of the investigated coastline featured an average erosion rate of 3.8 m per year, which corresponds to 17.83 km² of annually eroded land area. Multiple quality layers associated with both products, the generated DL coastline and the coastal change rates, are provided on a pixel basis to further assess the accuracy and applicability of the proposed data, methods, and products. Lastly, the extracted circum-Arctic erosion rates were utilized as the basis of an experimental framework for estimating the amount of permafrost and carbon loss resulting from eroding permafrost coastlines. Information on permafrost fraction, Active Layer Thickness (ALT), soil carbon content, and surface elevation was thereby combined with the aforementioned erosion rates. While the proposed experimental framework provides a valuable outline for quantifying the volume loss of frozen ground and carbon release, extensive validation of the utilized environmental products and the resulting volume loss numbers, based on 200 m segments, is necessary. Furthermore, data of higher spatial resolution and information on carbon content at greater soil depths are required for more accurate estimates.
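As a rough illustration of the CVA step described above, the sketch below forms a per-pixel change vector from the sd and median backscatter composites of two years; the fixed threshold and the array inputs are assumptions, not the thesis's calibrated procedure.

```python
import numpy as np

def change_vector_analysis(sd_t1, med_t1, sd_t2, med_t2, threshold=0.15):
    """Per-pixel change vector between two annual (sd, median) composites."""
    d_sd = sd_t2 - sd_t1
    d_med = med_t2 - med_t1
    magnitude = np.hypot(d_sd, d_med)      # how much the backscatter changed
    direction = np.arctan2(d_med, d_sd)    # separates erosion from build-up
    changed = magnitude > threshold
    return magnitude, direction, changed
```

The magnitude flags where the coastline changed; the direction of the change vector is what lets erosion be distinguished from sediment build-up.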
335

Deep learning role in scoliosis detection and treatment

Guanche, Luis 29 January 2024 (has links)
Scoliosis is a common skeletal condition in which a curvature forms along the coronal plane of the spine. Although scoliosis has long been recognized, its pathophysiology and best mode of treatment are still debated. Currently, definitive diagnosis of scoliosis and monitoring of its progression are performed through anterior-posterior (AP) radiographs by measuring the angle of coronal curvature, referred to as the Cobb angle. Cobb angle measurements can be performed by deep learning algorithms and are currently being investigated as a possible diagnostic tool for clinicians. This thesis focuses on the role of deep learning in the diagnosis and treatment of scoliosis and proposes a study design using these algorithms to better understand and classify the disease.
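By definition, the Cobb angle is the angle between the two most tilted vertebral endplates on the AP radiograph, so once a model has localized the endplates, the measurement itself reduces to a difference of tilt angles. A minimal sketch, with hypothetical tilt values standing in for landmark-detection output:

```python
import numpy as np

def cobb_angle(endplate_tilts_deg):
    """Angle between the most-tilted vertebral endplates (coronal plane)."""
    tilts = np.asarray(endplate_tilts_deg, dtype=float)
    return float(tilts.max() - tilts.min())

# Hypothetical endplate tilts produced by a landmark-detection model:
print(cobb_angle([4.0, 12.5, 21.0, -3.0, -10.5, 1.0]))  # 31.5 degrees
```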
336

A Comprehensive Analysis of Deep Learning for Interference Suppression, Sample and Model Complexity in Wireless Systems

Oyedare, Taiwo Remilekun 12 March 2024 (has links)
The wireless spectrum is limited and the demand for its use is increasing due to technological advancements in wireless communication, resulting in persistent interference issues. Despite progress in addressing interference, it remains a challenge for effective spectrum usage, particularly in license-free and managed shared bands and other opportunistic spectrum access solutions. Therefore, efficient and interference-resistant spectrum usage schemes are critical. In the past, most interference solutions have relied on avoidance techniques and expert-system-based mitigation approaches. Recently, researchers have applied artificial intelligence/machine learning techniques at the physical (PHY) layer, particularly deep learning, to suppress or compensate for the interfering signal rather than simply avoiding it. In addition, deep learning has been utilized in recent years to address various difficult problems in wireless communications, such as transmitter classification, interference classification, and modulation recognition, among others. To this end, this dissertation presents a thorough analysis of deep learning techniques for interference classification and suppression, and it examines the sample and model complexity issues that arise from using deep learning. First, we address the knowledge gap in the literature with respect to the state of the art in deep learning-based interference suppression. To account for the limitations of deep learning-based interference suppression techniques, we discuss several challenges, including lack of interpretability, the stochastic nature of the wireless channel, issues with open set recognition (OSR), and challenges with implementation. We also provide a technical discussion of the prominent deep learning algorithms proposed in the literature and offer guidelines for their successful implementation. Next, we investigate convolutional neural network (CNN) architectures for interference and transmitter classification tasks. In particular, we utilize a CNN architecture to classify interference, investigate the model complexity of CNN architectures for classifying homogeneous and heterogeneous devices, and examine their impact on test accuracy. Next, we explore issues of sample size and sample quality in the training data for deep learning. In doing so, we also propose a rule of thumb for transmitter classification using CNNs based on the findings from our sample complexity study. Finally, in cases where interference cannot be avoided, it is important to suppress it. To achieve this, we build upon autoencoder work from other fields to design a convolutional neural network (CNN)-based autoencoder model to suppress interference, thereby ensuring the coexistence of different wireless technologies in both licensed and unlicensed bands. / Doctor of Philosophy / Wireless communication has advanced greatly in recent years, but it is still hard to use the limited available spectrum without interference from other devices. In the past, researchers tried to avoid interference using expert systems. Now, researchers are using artificial intelligence and machine learning, particularly deep learning, to mitigate interference in a different way. Deep learning has also been used to solve other tough problems in wireless communication, such as classifying the type of device transmitting a signal, classifying the signal itself, or avoiding interference altogether.
This dissertation presents a comprehensive review of deep learning techniques for reducing interference in wireless communication. It also leverages a deep learning model called a convolutional neural network (CNN) to classify interference and investigates how the complexity of the CNN affects its performance. It then examines the relationship between model performance and dataset size (i.e., sample complexity) in wireless communication. Finally, it discusses a CNN-based autoencoder technique to suppress interference in digital amplitude-phase modulation systems. All of these techniques are important for making sure different wireless technologies can work together in both licensed and unlicensed bands.
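As a hedged sketch of what a CNN-based autoencoder for interference suppression can look like, the model below maps a window of interfered IQ samples (stacked as two real channels) back to the clean signal. The layer sizes, the two-channel representation, and an MSE training objective are illustrative assumptions, not the dissertation's exact architecture.

```python
import torch
import torch.nn as nn

class InterferenceSuppressor(nn.Module):
    """1-D conv autoencoder over IQ samples stacked as two real channels."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 2, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):            # x: (batch, 2, num_samples)
        return self.decoder(self.encoder(x))

# Training pairs would be (interfered, clean) windows; MSE is a common loss.
model = InterferenceSuppressor()
x = torch.randn(8, 2, 1024)
print(model(x).shape)  # torch.Size([8, 2, 1024])
```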
337

Multi-Template Temporal Siamese Network for Visual Object Tracking

Sekhavati, Ali 04 January 2023 (has links)
Visual object tracking is the task of assigning a unique ID to an object in a video frame, determining whether it is present in the current frame and, if it is, precisely localizing its position. There are numerous challenges in object tracking, such as changes of illumination, partial or full occlusion, changes in target appearance, blurring caused by camera movement, the presence of objects similar to the target, changes in video image quality over time, etc. Due to these challenges, traditional computer vision techniques cannot perform high-quality tracking, especially long-term tracking. Almost all state-of-the-art methods in object tracking now use artificial intelligence, and more specifically convolutional neural networks. In this work, we present a Siamese-based tracker that differs from previous works in two ways. Firstly, most Siamese-based trackers take the target in the first frame as the ground truth. Despite the success of such methods in previous years, this does not guarantee robust tracking, as it cannot handle many of the challenges that change target appearance, such as blurring caused by camera movement, occlusion, and pose variation. In this work, while keeping the first frame as a template, we add five additional templates that are dynamically updated and replaced based on the target classification score in different frames. Diversity, similarity, and recency are the criteria for choosing the members of the bag. We call this the bag of dynamic templates. Secondly, many Siamese-based trackers are vulnerable to mistakenly tracking another, similar-looking object instead of the intended target. Many researchers have proposed computationally expensive approaches, such as tracking all the distractors along with the given target and discriminating between them in every frame. In this work, we propose an approach that handles this issue by estimating the target's position in the next frame from its bounding box coordinates in previous frames. We use a temporal network over the history of several previous frames, measure the classification scores of candidates against the templates in the bag of dynamic templates, and use a sequential tracker confidence value that reflects how confident the tracker has been in previous frames. We call this a robustifier, designed to prevent the tracker from continuously switching between the target and possible distractors. Extensive experiments on the OTB 50, OTB 100, and UAV20L datasets demonstrate the superiority of our work over state-of-the-art methods.
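The abstract does not spell out its exact update rule; the sketch below is one plausible reading of the bag-of-dynamic-templates idea, keeping the first-frame template fixed and admitting new templates by confidence and diversity. All thresholds and the eviction rule are assumed values, not the thesis's procedure.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def update_bag(bag, feat, score, frame_idx, max_size=6,
               score_min=0.8, sim_max=0.9):
    """bag: list of (feature, score, frame_idx); index 0 is the first frame."""
    if score < score_min:                 # low confidence: possible occlusion
        return bag
    if any(cosine_sim(feat, f) > sim_max for f, _, _ in bag):
        return bag                        # too similar: adds no diversity
    bag.append((feat, score, frame_idx))  # recency: newest candidates enter
    if len(bag) > max_size:               # keep the first-frame template,
        worst = min(range(1, len(bag)), key=lambda i: bag[i][1])
        bag.pop(worst)                    # evict the weakest of the rest
    return bag
```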
338

ACOUSTIC EMISSION MONITORING OF THE POWDER BED FUSION PROCESS WITH MACHINE LEARNING APPROACH

Ghayoomi Mohammadi, Mohammad January 2021 (has links)
Laser powder bed fusion (L-PBF) is an additive manufacturing process in which a heat source (such as a laser) consolidates material in powder form to build three-dimensional parts. For quality control purposes, this thesis uses real-time monitoring in L-PBF: defects such as pores and cracks can be detected using acoustic emission (AE) during the powder bed selective laser melting process via a machine learning approach. This thesis investigates the performance of several machine learning (ML) techniques for online defect detection within the L-PBF process. The goal is to improve consistency in product quality and process reliability, and the application of AE sensors to receive elastic waves during the printing process is a cost-effective way of meeting that goal. As a first step, stainless steel 316L samples were produced using eight process parameter settings, and the acoustic emission signals received during printing were collected and analyzed under the various process parameters. Several time- and frequency-domain features were extracted from the AE signals during data mining. K-means clustering was employed for unsupervised learning, and a neural network approach was used for supervised machine learning on the dataset. Data labelling was conducted based on laser power, clustering results, and signal time durations. The results showed the potential of real-time quality monitoring using AE in the L-PBF process. Some process parameters in this project were intentionally adjusted to create three levels of defects in H13 tool steel samples: the first class was printed with minimal defects, the second with intentional cracks, and the third with intentional cracks and porosities. AE signals were acquired during the samples' manufacturing process, and three different machine learning (ML) techniques were applied to analyze and interpret the data. First, the data was labelled using a hierarchical K-means clustering method, followed by a supervised deep learning neural network (DL) to match acoustic signals with defect type. Second, principal component analysis (PCA) was used to reduce the dimensionality of the data, and a Gaussian Mixture Model (GMM) enabled fast detection of defects, which is suitable for online monitoring. Third, a variational autoencoder (VAE) approach was used to obtain general features of the signal, which could be used as input for the classifier. Quality trends in AE signals collected from 316L samples were successfully detected using a supervised DL model trained on the H13 tool steel dataset. The VAE approach represents a new method for detecting defects within L-PBF processes, which would eliminate the need for model retraining on different materials. / Thesis / Master of Applied Science (MASc)
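As an illustration of the feature-extraction step mentioned above, the sketch below computes a handful of time- and frequency-domain features commonly used with AE waveforms; the thesis's exact feature set is not specified here, so treat this selection as an assumption.

```python
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import welch

def ae_features(x, fs):
    """A few time- and frequency-domain features of one AE waveform."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    crest = peak / (rms + 1e-12)                  # sensitive to burst events
    kurt = kurtosis(x)                            # peakedness of the signal
    f, pxx = welch(x, fs=fs, nperseg=min(1024, len(x)))
    centroid = np.sum(f * pxx) / (np.sum(pxx) + 1e-12)
    return np.array([rms, peak, crest, kurt, centroid])
```

Feature vectors like this are what the K-means clustering and the supervised neural network described above would consume.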
339

Predicting Transfer Learning Performance Using Dataset Similarity for Time Series Classification of Human Activity Recognition / Transfer Learning Performance Using Dataset Similarity on Realtime Classification

Clark, Ryan January 2022 (has links)
Deep learning is increasingly becoming a viable way of classifying all types of data. Modern deep learning algorithms, such as one-dimensional convolutional neural networks, have demonstrated excellent performance in classifying time series data because of their ability to identify time-invariant features. A primary challenge of deep learning for time series classification is the large amount of data required for training, and many application domains, such as medicine, have difficulty obtaining sufficient data. Transfer learning is a deep learning method used to apply feature knowledge from one deep learning model to another; it is a powerful tool when the two training datasets are similar, giving smaller datasets the benefit of more robust, larger datasets. This makes it vital that the best source dataset is selected when performing transfer learning, and presently there is no metric for this purpose. In this thesis, a metric for predicting the performance of transfer learning is proposed. To develop this metric, the research focuses on classification and transfer learning for human-activity-recognition time series data. For general time series data, finding temporal relations between signals is computationally intensive using non-deep-learning techniques. Rather than time-series signal processing, a neural network autoencoder was used to first transform the source and target datasets into a time-independent feature space. To compare and quantify the suitability of transfer learning datasets, two metrics were examined: i) the average embedded signal from each dataset was used to calculate the distance between the datasets' centroids, and ii) a generative adversarial network (GAN) was trained and its discriminator used to assess the dissimilarity between source and target. This thesis measures the correlation between the distance between two datasets and their similarity, as well as between a GAN's ability to discriminate between two datasets and their similarity. The discriminator metric, however, does suffer from an upper limit of dissimilarity. These metrics were then used to predict the success of transfer learning from one dataset to another for the purpose of general time series classification. / Thesis / Master of Applied Science (MASc) / Over the past decade, advances in computational power and increases in data quantity have made deep learning a useful method of complex pattern recognition and classification. There is a growing desire to use these complex algorithms on smaller quantities of data. To achieve this, a deep learning model is first trained on a larger dataset and then retrained on the smaller dataset; this is called transfer learning. For transfer learning to be effective, there needs to be a level of similarity between the two datasets so that properties from the larger dataset can be learned and then refined using the smaller dataset. Therefore, it is of great interest to understand what level of similarity exists between the two datasets. The goal of this research is to provide a similarity metric between two time series classification datasets so that potential performance gains from transfer learning can be better understood. Measuring similarity between two time series datasets presents a unique challenge due to the nature of the data. To address this challenge, an encoder approach was implemented to transform the time series data into a form where each signal example can be compared against another. In this thesis, different similarity metrics were evaluated and correlated with the performance of a deep learning model, allowing prediction of how effective transfer learning may be when applied.
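Metric (i) is simple enough to sketch directly: after the autoencoder embeds both datasets in the shared time-independent feature space, the distance between their centroids is computed. Array shapes and names here are assumptions.

```python
import numpy as np

def centroid_distance(source_emb, target_emb):
    """Each input: (num_examples, feature_dim) matrix of encoder outputs."""
    return float(np.linalg.norm(source_emb.mean(axis=0)
                                - target_emb.mean(axis=0)))
```

Per the correlation reported above, a smaller centroid distance would predict a better transfer-learning outcome between the two datasets.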
340

Sensor capture and point cloud processing for off-road autonomous vehicles

Farmer, Eric D 01 May 2020 (has links)
Autonomous vehicles are complex robotic and artificial intelligence systems working together to achieve safe operation in unstructured environments. The objective of this work is to provide a foundation to develop more advanced algorithms for off-road autonomy. The project explores the sensors used for off-road autonomy and the data capture process. Additionally, the point cloud data captured from lidar sensors is processed to restore some of the geometric information lost during sensor sampling. Because ground truth values are needed for quantitative comparison, the MAVS was leveraged to generate a large off-road dataset in a variety of ecosystems. The results demonstrate data capture from the sensor suite and successful reconstruction of the selected geometric information. Using this geometric information, the point cloud data is more accurately segmented using the SqueezeSeg network.
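The abstract does not state which geometric quantities were restored; as one representative example of recovering local geometry from sampled lidar returns, the sketch below estimates per-point surface normals with k-nearest-neighbour PCA. The neighbourhood size is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """points: (N, 3) lidar cloud; per-point surface normals via k-NN PCA."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        local = points[nbrs] - points[nbrs].mean(axis=0)
        # The normal is the direction of least variance in the neighbourhood.
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```

Geometric attributes like these can be appended to each point as extra channels before feeding the cloud to a segmentation network such as SqueezeSeg.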
