1 |
The application of size-resolved hygroscopicity measurements to understand the physical and chemical properties of ambient aerosol. Santarpia, Joshua Lee. 29 August 2005.
During the summer of 2002, a modified tandem differential mobility analyzer
(TDMA) was used to examine the size-resolved hydration state of the ambient aerosol in
Southeast Texas. Although there were slight variations in the measured properties over
the course of the study, the deliquescent particles observed were almost always present as metastable aqueous solutions. A relative humidity (RH) scanning TDMA system was
used to measure the deliquescence/crystallization properties of ambient aerosol
populations in the same region. During August, sampling was conducted at a rural site in
College Station, and in September at an urban site near the Houston ship channel.
Measurements from both sites indicate cyclical changes in the composition of the soluble fraction of the aerosol, which are not strongly linked to the local aerosol source. The observations show that as temperature increases and RH decreases, the hysteresis loop
describing the RH-dependence of aerosol hygroscopic growth collapses. It is proposed
that this collapse is due to a decrease in the ammonium to sulfate ratio in the aerosol
particles, which coincides with increasing temperature and decreasing RH. This cyclical
change in aerosol acidity may influence secondary organic aerosol (SOA) production and
may exacerbate the impact of the aerosol on human health. The compositional changes
also result in a daily cycle in crystallization RH that is in phase with that of the ambient
RH, which reduces the probability that hygroscopic particles will crystallize in the
afternoon when the ambient RH is a minimum. During June and July of 2004 airborne
measurements of size-resolved aerosol hygroscopic properties were made near Monterey,
California. These were used to examine the change in soluble mass after the aerosol had
been processed by cloud. The calculated change in soluble mass after cloud-processing
ranged from 0.66 µg m⁻³ to 1.40 µg m⁻³. Model calculations showed these values to be
within the theoretical bounds for the aerosols measured. Mass light-scattering efficiencies
were calculated from both an averaged aerosol size distribution and from distributions
modified to reflect the effects of cloud. These calculations show that the increase in mass
light-scattering efficiency should be between 6% and 14%.
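For context, the quantity at the heart of these TDMA measurements is the diameter growth factor, written here in the standard notation of the hygroscopicity literature rather than the thesis's own:

\[
g(\mathrm{RH}) \;=\; \frac{D_p(\mathrm{RH})}{D_{p,\mathrm{dry}}}
\]

On rising RH, a crystalline particle stays near g = 1 until its deliquescence RH (DRH); on falling RH, the solution droplet persists at g > 1 well below the DRH, down to the crystallization RH (CRH). Between the CRH and the DRH either branch can exist, and it is this hysteresis loop whose collapse is reported above.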
|
2 |
Contrasting aerosol refractive index and hygroscopicity in the inflow and outflow of deep convective storms: Analysis of airborne data from DC3. Sorooshian, Armin; Shingler, T.; Crosbie, E.; Barth, M. C.; Homeyer, C. R.; Campuzano-Jost, P.; Day, D. A.; Jimenez, J. L.; Thornhill, K. L.; Ziemba, L. D.; Blake, D. R.; Fried, A. 27 April 2017.
We examine three case studies during the Deep Convective Clouds and Chemistry (DC3) field experiment when storm inflow and outflow air were sampled for aerosol subsaturated hygroscopicity and the real part of the refractive index (n) with a Differential Aerosol Sizing and Hygroscopicity Probe (DASH-SP) on the NASA DC-8. Relative to inflow aerosol particles, outflow particles were more hygroscopic (by 0.03, based on the estimated hygroscopicity parameter κ) in one of the three storms examined. Two of three control flights with no storm convection reveal higher κ values at high altitude (> 8 km) versus < 4 km, albeit by only 0.02. Entrainment modeling shows that measured κ values in the outflow of the three storm flights are higher than values predicted (by 0.03-0.11) from knowledge of κ in the inflow and in clear air adjacent to the storms. This suggests that other process(es), such as secondary aerosol formation via aqueous-phase chemistry, contributed to the hygroscopicity enhancements. Values of n were higher in the outflow of two of the three storm flights, reaching as high as 1.54. More statistically significant differences were observed in the control flights (no storms), where n decreased from 1.50-1.52 (< 4 km) to 1.49-1.50 (> 8 km). Chemical data show that enhanced hygroscopicity was coincident with lower organic mass fractions, higher sulfate mass fractions, and higher O:C ratios of the organic aerosol. Refractive index did not correlate as well with the available chemical data. Deep convection is shown to alter aerosol radiative properties, which has implications for aerosol effects on climate.
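For reference, the subsaturated hygroscopicity parameter retrieved from instruments like the DASH-SP is conventionally the κ of Petters and Kreidenweis (2007). Neglecting the Kelvin (curvature) term, κ follows from the measured diameter growth factor g at a given RH; this is the textbook relation, not necessarily the exact retrieval used in the paper:

\[
\kappa \;\approx\; \left(g^{3}-1\right)\,\frac{1-a_w}{a_w},
\qquad a_w \approx \frac{\mathrm{RH}}{100\,\%}
\]

At typical instrument humidities a growth-factor difference of a few percent at fixed RH maps onto κ differences of the order 0.02-0.1 quoted above.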
|
3 |
An Obstacle Avoidance System for the Visually Impaired Using 3-D Point Cloud Processing. Taylor, Evan Justin. 01 December 2017.
The long white cane offers many benefits for the blind and visually impaired. Still, many users report being injured both indoors and outdoors while using the long white cane. One frequent cause of injury is that the long white cane cannot detect obstacles above the waist of the user. This thesis presents a system that attempts to augment the capabilities of the long white cane by sensing the environment around the user, creating a map of obstacles within the environment, and providing simple haptic feedback to the user. The proposed augmented cane system uses the Asus Xtion Pro Live infrared depth sensor to capture the user's environment as a point cloud. The open-source Point Cloud Library (PCL) and Robot Operating System (ROS) are used to process the point cloud. The points representing the ground plane are extracted to more clearly define potential obstacles. The system determines the nearest point for each 1° across the horizontal view. These nearest points are recorded as a ROS LaserScan message and used in a simple haptic feedback system where the rumble feedback is based on two different cost functions. Twenty-two volunteers participated in a user demonstration that showed the augmented cane system can successfully communicate the presence of obstacles to blindfolded users. The users reported experiencing a sense of safety and confidence in the system's abilities. Obstacles above waist height are detected and communicated to the user. The system requires additional development before it could be considered a viable product for the visually impaired.
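The thesis does not reproduce its code here, so the following is a minimal numpy sketch of the nearest-point-per-degree reduction described above; the field of view, bin width, camera-frame convention, and function name are illustrative assumptions:

```python
import numpy as np

def nearest_point_per_degree(points, fov_deg=58, bin_deg=1.0):
    """Reduce a 3-D point cloud (ground already removed) to the nearest
    obstacle range in each 1-degree horizontal bin, LaserScan-style.

    points : (N, 3) array of x (right), y (vertical), z (forward) in
    metres, camera frame; fov_deg ~58 is the Xtion's nominal horizontal FOV.
    """
    x, z = points[:, 0], points[:, 2]
    azimuth = np.degrees(np.arctan2(x, z))       # 0 deg = straight ahead
    rng = np.hypot(x, z)                         # horizontal range to point
    half = fov_deg / 2.0
    n_bins = int(fov_deg / bin_deg)
    ranges = np.full(n_bins, np.inf)             # inf = no return in that bin
    bins = ((azimuth + half) / bin_deg).astype(int)
    ok = (bins >= 0) & (bins < n_bins)
    np.minimum.at(ranges, bins[ok], rng[ok])     # keep nearest point per bin
    return ranges

# Toy usage: one obstacle straight ahead at 1.2 m, one 20 deg right at 0.8 m
pts = np.array([[0.0, 0.0, 1.2],
                [0.8 * np.sin(np.radians(20)), 0.5, 0.8 * np.cos(np.radians(20))]])
print(nearest_point_per_degree(pts))
```

The resulting range array plays the role of the ROS LaserScan message; the two haptic cost functions mentioned above would then map, for instance, min(ranges) to rumble intensity, though their actual form is not specified in the abstract.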
|
4 |
AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR. Hamraz, Hamid. 01 January 2018.
Traditional forest management relies on small field samples and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble features. In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual tree level.
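As a quick check on the distributed-processing figures quoted above (the speedup, core count, and runtime are from the abstract; the serial baseline is inferred, not reported):

```python
# Parallel-performance arithmetic implied by the abstract's figures.
cores = 192
parallel_hours = 2.5
speedup = 170                               # reported speedup (~)

serial_hours = speedup * parallel_hours    # inferred serial runtime
efficiency = speedup / cores               # fraction of ideal linear scaling

print(f"inferred serial runtime: {serial_hours:.0f} h (~{serial_hours/24:.1f} days)")
print(f"parallel efficiency: {efficiency:.0%}")   # ~89% of ideal
```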
|
5 |
3D Semantic SLAM of Indoor Environment with Single Depth Sensor. Ghorpade, Vijaya Kumar. 20 December 2017.
Intelligent autonomous action in an ordinary environment by a mobile robot requires maps. A map holds the spatial information about the environment and gives the 3D geometry of the robot's surroundings, used not only to avoid collisions with complex obstacles but also for self-localization and task planning. In the future, however, service and personal robots will prevail, and they will need to interact with the environment in addition to localizing and navigating. This interaction demands that next-generation robots understand and interpret their environment and perform tasks in a human-centric way. A simple map of the environment is far from sufficient for robots to coexist with and assist humans. Human beings effortlessly build maps and interact with their environment; for robots, these seemingly trivial tasks are complex problems. Layering semantic information on regular geometric maps is the leap that turns an ordinary mobile robot into a more intelligent autonomous system. A semantic map augments a general map with information about entities (objects, functionalities, or events) located in the space. Including semantics in the map enhances the robot's spatial knowledge representation and improves its performance on complex tasks and in human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. This thesis attempts semantic SLAM using a time-of-flight camera that delivers only depth information. Time-of-flight cameras have dramatically changed the field of range imaging, surpassing traditional scanners in acquisition speed, simplicity, and price, and such depth sensors are expected to be ubiquitous in future robotic applications. After a brief motivation for adding semantics to ordinary maps in the first chapter, state-of-the-art methods are discussed in the second. Before the camera is used for data acquisition, its noise characteristics are studied meticulously and the camera is properly calibrated; a novel noise-filtering algorithm developed in the process yields clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated from the estimated poses using a context-based similarity score metric designed specifically for the acquisition parameters and data used. Planar surfaces are extracted from the reconstructed point cloud map using the Hough transform. Semantic annotation of the reconstructed environment then proceeds at two levels. At the large-scale level, prominent surfaces of the indoor environment (walls, doors, ceilings, clutter) are extracted and classified. At the object level, a single 2.5D scene from the camera is parsed and its objects and surfaces are recognized. Object recognition is achieved using a novel pose-invariant shape signature based on the probability distribution of the most stable and repeatable 3D keypoints, allowing scenes containing known and unknown objects, with or without occlusion, to be interpreted. The classification of prominent surfaces and the single-scene semantic interpretation are performed with supervised machine learning and deep learning systems. To this end, the object dataset and SLAM data are made publicly available for academic research.
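The plane-extraction step invites a compact illustration. Below is a minimal, generic sketch of Hough-transform plane detection in a point cloud (an accumulator over plane orientations and offsets); the discretization, thresholds, and overall simplicity are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def hough_planes(points, n_theta=18, n_phi=36, rho_res=0.05, min_votes=500):
    """Minimal 3-D Hough transform for plane detection.

    A plane is parametrised as n(theta, phi) . x = rho; every point votes
    for the rho it would have under each candidate normal, and accumulator
    peaks correspond to dominant planes (walls, floor, ceiling)."""
    thetas = np.linspace(0.0, np.pi, n_theta)              # polar angle of normal
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)  # half-sphere: n and -n give the same plane
    t, p = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.column_stack([(np.sin(t) * np.cos(p)).ravel(),
                               (np.sin(t) * np.sin(p)).ravel(),
                               np.cos(t).ravel()])         # (D, 3) candidate normals

    rho = points @ normals.T                               # (N, D) signed offsets
    offset = rho.min()
    bins = np.round((rho - offset) / rho_res).astype(int)  # discretise rho
    acc = np.zeros((len(normals), bins.max() + 1), dtype=np.int32)
    dirs = np.broadcast_to(np.arange(len(normals)), bins.shape)
    np.add.at(acc, (dirs.ravel(), bins.ravel()), 1)        # cast votes

    planes = [(normals[d], b * rho_res + offset, int(acc[d, b]))
              for d, b in zip(*np.nonzero(acc >= min_votes))]
    return sorted(planes, key=lambda pl: -pl[2])           # strongest first
```

In practice the cloud would be voxel-downsampled first, and accumulator peaks refined, for example by a least-squares fit to their inlier points.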
|
6 |
Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data. Vock, Dominik. 08 May 2014.
Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, and forest inventory are a few examples of fields that benefit from 3D surveying data. Compared with the original 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems offers a noticeable improvement in surveying time and in the amount of 3D information generated. The 3D data allow detailed post-processing and better visualization of all relevant spatial information. Yet extracting the required information from the raw scan data and generating usable visual output still demand time-consuming, complex, user-driven processing with the commercially available 3D software tools.
In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable for certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks, owing to the varying requirements of the different fields of research.
This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios and specifically the analysis and documentation of traffic accident scenes. The data, obtained by sampling the scene with a mobile scanning system, are evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment.
To this end, the work adapts and validates various existing approaches to laser scan segmentation for accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects. The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as the possibilities of combining them with other procedures. The knowledge obtained is used to develop new algorithms and procedures that allow a satisfying segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions.
Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to widen the range of possible applications of the developed system for data export and utilization in third-party software tools.
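As one concrete instance of the segmentation step discussed above: once the road surface has been removed, the remaining points are commonly grouped into object candidates (vehicles, walls, trees) by Euclidean clustering. A generic sketch under that assumption, with illustrative parameters, not the thesis's own implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5, min_size=30):
    """Group off-ground points into clusters: points closer than `radius`
    belong to the same object candidate (flood fill over a k-d tree).
    `radius` and `min_size` are illustrative and tuned per point density."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        frontier, members = [seed], []
        unvisited[seed] = False
        while frontier:                       # flood fill from the seed
            idx = frontier.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], r=radius):
                if unvisited[nb]:
                    unvisited[nb] = False
                    frontier.append(nb)
        if len(members) >= min_size:          # drop sparse noise blobs
            clusters.append(np.array(members))
    return clusters
```

The k-d tree keeps each neighbourhood query near O(log N), which matters at mobile-mapping point densities.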
|
7 |
Probabilistic Multi-Modal Data Fusion and Precision Coordination for Autonomous Mobile Systems Navigation: A Predictive and Collaborative Approach to Visual-Inertial Odometry in Distributed Sensor Networks using Edge Nodes. Luppi, Isabella. January 2023.
This research proposes a novel approach for improving the navigation of autonomous mobile systems in dynamic and potentially occluded environments. It introduces a tracking framework that combines data from stationary sensing units and on-board sensors, addressing challenges of computational efficiency, reliability, and scalability. The work integrates spatially distributed LiDAR and RGB-D camera sensors, with the optional inclusion of on-board IMU-based dead reckoning, forming a robust and efficient coordination framework for autonomous systems. Two key developments are achieved. First, a point cloud object detection technique, "Generalized L-Shape Fitting", is advanced, improving bounding-box fitting over point cloud data. Second, a new estimation framework, the Distributed Edge Node Switching Filter (DENS-F), is established. The DENS-F optimizes resource utilization and coordination while minimizing reliance on on-board computation. It also incorporates a short-term predictive feature through an Adaptive-Constant Acceleration motion model that uses behaviour-based control inputs. The findings indicate that the DENS-F substantially improves accuracy and computational efficiency compared to the Kalman Consensus Filter (KCF), particularly when additional inertial data are provided by the vehicle. The type of sensor deployed and the consistency of the vehicle's path are also found to significantly influence the system's performance. The research opens new perspectives for enhancing autonomous vehicle tracking, highlighting opportunities for future work on prediction models, sensor selection, and precision coordination.
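The motion model named above builds on the textbook constant-acceleration Kalman filter. A one-axis sketch of the predict step follows; the function name and noise value are illustrative, and the adaptive part and behaviour-based control inputs of the actual DENS-F model are not shown:

```python
import numpy as np

def ca_predict(x, P, dt, accel_noise=0.5):
    """One predict step of a 1-D constant-acceleration Kalman filter.
    State x = [position, velocity, acceleration]; P is its covariance.
    accel_noise (m/s^2, illustrative) models jerk as white noise."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])             # constant-acceleration dynamics
    G = np.array([[0.5 * dt**2], [dt], [1.0]])  # how accel noise enters the state
    Q = (accel_noise**2) * (G @ G.T)            # process noise covariance
    x = F @ x                                   # propagate the state
    P = F @ P @ F.T + Q                         # propagate the uncertainty
    return x, P

# Toy usage: a vehicle at 2 m/s with 0.3 m/s^2, predicted 0.1 s ahead
x0 = np.array([0.0, 2.0, 0.3])
P0 = np.eye(3) * 0.1
x1, P1 = ca_predict(x0, P0, dt=0.1)
print(x1)   # ~[0.2015, 2.03, 0.3]
```

An adaptive variant in the spirit of the abstract would, for example, rescale accel_noise or inject a control term from the vehicle's expected behaviour; those details are not given in the abstract.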
|