521

Punktwolken von Handscannern und ihr Potenzial

Martienßen, Thomas 16 July 2019 (has links)
This contribution addresses practical aspects, capabilities and limitations of the hand-held ZEB-REVO scanner from GeoSLAM for underground mine mapping. Besides the handling of the hardware underground, the post-processing of the generated point clouds for mining applications and the requirements for georeferencing are discussed, including one way to implement the referencing. An accuracy assessment is presented by means of a comparison with point clouds from terrestrial laser scanners from Riegl in the RiScanPro software, which shows the user the limits of the system. The investigations conclude with a critical evaluation of the ZEB-REVO system.
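The accuracy assessment described above reduces to nearest-neighbor distance statistics between a test point cloud and a reference cloud. A minimal sketch, assuming simple NumPy arrays; the function name and the brute-force approach are illustrative, not the RiScanPro implementation:

```python
import numpy as np

def cloud_to_cloud_rms(test_pts, ref_pts):
    """RMS of nearest-neighbor distances from a test point cloud to a
    reference cloud -- a simple form of cloud-to-cloud comparison
    (brute force; production tools use optimized spatial indices).
    test_pts: (N, 3) array, ref_pts: (M, 3) array."""
    # pairwise distances (N, M), then the closest reference point for each
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nn = d.min(axis=1)
    return float(np.sqrt(np.mean(nn ** 2)))

# A cloud compared with itself has zero deviation
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
rms = cloud_to_cloud_rms(ref, ref)  # 0.0
```

The O(N·M) distance matrix is only viable for small clouds; for full scans a k-d tree or octree lookup replaces it.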
522

Comparison of simulated and observed horizontal inhomogeneities of optical thickness of Arctic stratus

Schäfer, Michael, Loewe, Katharina, Ehrlich, André, Hoose, Corinna, Wendisch, Manfred 15 March 2021 (has links)
Two-dimensional horizontal fields of cloud optical thickness derived from airborne measurements of solar spectral reflected radiance are compared with semi-idealized large eddy simulations (LES) of Arctic stratus performed with the Consortium for Small-scale Modeling (COSMO) atmospheric model. The measurements were collected during the Vertical Distribution of Ice in Arctic Clouds (VERDI) campaign carried out in Inuvik, Canada, in April/May 2012. The input for the LES is obtained from collocated dropsonde observations of a persistent Arctic stratus above the sea-ice-free Beaufort Sea. Simulations are performed at spatial resolutions of 50 m (1.6 km by 1.6 km domain) and 100 m (6.4 km by 6.4 km domain). Macrophysical cloud properties, such as cloud top altitude and vertical extent, are well captured by the COSMO simulations. However, COSMO produces rather homogeneous clouds compared to the measurements, in particular at the coarser spatial resolution. For both spatial resolutions, the directional structure of the cloud inhomogeneity is well represented by the model. This study was first published by Schäfer et al. (2018).
523

A Survey of Far Ultraviolet Spectroscopic Explorer and Hubble Space Telescope Sight Lines Through High-Velocity Cloud Complex C

Collins, Joseph A., Shull, J. Michael, Giroux, Mark L. 01 March 2003 (has links)
Using archival Far Ultraviolet Spectroscopic Explorer (FUSE) and Hubble Space Telescope (HST) data, we have assembled a survey of eight sight lines through high-velocity cloud Complex C. Abundances of the observed ion species vary significantly among these sight lines, indicating that Complex C is not well characterized by a single metallicity. Reliable metallicities based on [O I/H I] range from 0.1 to 0.25 Z⊙. Metallicities based on [S II/H I] range from 0.1 to 0.6 Z⊙, but the trend of decreasing abundance with H I column density indicates that photoionization corrections may affect the conversion to [S/H]. We present models of the dependence of the ionization correction on H I column density; these ionization corrections are significant when converting ion abundances to elemental abundances for S, Si, and Fe. The measured abundances in this survey indicate that parts of the cloud have a higher metallicity than previously thought and that Complex C may represent a mixture of "Galactic fountain" gas with infalling low-metallicity gas. We find that [S/O] and [Si/O] have solar ratios, suggesting little dust depletion. Further, the measured abundances suggest an overabundance of O, S, and Si relative to N and Fe. The enhancement of these α-elements suggests that the bulk of the metals in Complex C were produced by Type II supernovae and then removed from the star-forming region, possibly via supernova-driven winds or tidal stripping, before the ISM could be enriched by N and Fe.
524

Metallicity and Ionization in High-Velocity Cloud Complex C

Collins, Joseph A., Shull, J. Michael, Giroux, Mark L. 01 March 2007 (has links)
We analyze HST and FUSE ultraviolet spectroscopic data for 11 sight lines passing through the infalling high-velocity cloud (HVC) Complex C. These sight lines pass through regions with H I column densities in the range N(H I) = 10^18.1–10^20.1 cm^-2. From [O I/H I] abundances, we find that Complex C metallicities range from 0.09 to 0.29 Z⊙, with a column-density-weighted mean of 0.13 Z⊙. Nitrogen (N I) is underabundant by factors of (0.01–0.07) (N/H)⊙, significantly less than oxygen relative to solar abundances. This pattern suggests nucleosynthetic enrichment by Type II SNe, consistent with an origin in the Galactic fountain or infalling gas produced in winds from Local Group galaxies. The range of metallicity and its possible (2σ) dependence on N(H I) could indicate some mixing of primordial material with enriched gas from the Milky Way, but the mixing mechanism is unclear. We also investigate the significant highly ionized component of Complex C, detected in C IV, Si IV, and O VI, but not in N V. High-ion column density ratios show little variance and are consistent with shock ionization or ionization at interfaces between Complex C and a hotter surrounding medium. Evidence for the former mechanism is seen in the Mrk 876 line profiles, where the offset in line centroids between low and high ions suggests a decelerating bow shock.
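The [O I/H I] metallicity estimate used in this and the preceding survey is a simple ratio of column densities scaled to the solar oxygen abundance. A sketch of that arithmetic; the function name and the adopted solar value log(O/H)⊙ ≈ -3.31 are illustrative assumptions, not values quoted from the paper:

```python
import math

def metallicity_solar_units(log_N_OI, log_N_HI, log_OH_sun=-3.31):
    """Estimate Z/Z_sun from O I and H I column densities (log10, cm^-2).

    O I and H I have nearly identical ionization potentials and are
    coupled by charge exchange, so N(O I)/N(H I) traces O/H with little
    ionization correction -- which is why [O I/H I] is the preferred
    metallicity indicator here."""
    return 10.0 ** ((log_N_OI - log_N_HI) - log_OH_sun)

# Hypothetical columns N(O I) = 10^15.0, N(H I) = 10^19.3
z = metallicity_solar_units(15.0, 19.3)  # ≈ 0.10 in solar units
```

Values of this order match the 0.09–0.29 Z⊙ range reported for Complex C; species such as S, Si, and Fe additionally need the ionization corrections modeled in the text.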
525

Highly Ionized High-Velocity Clouds: Hot Intergalactic Medium or Galactic Halo?

Collins, Joseph A., Michael Shull, J., Giroux, Mark L. 10 April 2005 (has links)
We use spectroscopic data from the Hubble Space Telescope (HST) and Far Ultraviolet Spectroscopic Explorer (FUSE) to study the wide range of ionization states of the "highly ionized high-velocity clouds" (HVCs). Studied extensively in O VI absorption, these clouds are usually assumed to be infalling gas in the Galactic halo at distances less than 50 kpc. An alternative model attributes the O VI (and O VII X-ray absorption) to cosmological structures of low-density, shock-heated intergalactic gas, distributed over 1-3 Mpc surrounding the Milky Way. The latter interpretation is unlikely, owing to the enormous required mass of gas (4 × 10^12 M⊙). Our detection, in 9 of 12 sight lines, of low-ionization stages (C II/III/IV; Si II/III/IV) at high velocities similar to those of O VI requires gas densities far above that (n_H ≈ 5 × 10^-6 cm^-3) associated with the warm-hot intergalactic medium (WHIM). These HVCs are probably cooling, multiphase gas in the Galactic halo, bow shocks, and interfaces between clouds falling through a hot, rotating gaseous halo. The velocity segregation of these HVCs in Galactic coordinates is consistent with a pattern in which infalling clouds reflect the sense of Galactic rotation, with peculiar velocities superposed.
526

Highly Ionized High-Velocity Clouds Toward PKS 2155-304 and Markarian 509

Collins, Joseph A., Shull, J. Michael, Giroux, Mark L. 10 April 2004 (has links)
To gain insight into four highly ionized high-velocity clouds (HVCs) discovered by Sembach et al., we have analyzed data from the Hubble Space Telescope (HST) and Far Ultraviolet Spectroscopic Explorer (FUSE) for the PKS 2155-304 and Mrk 509 sight lines. We measure strong absorption in O VI and column densities of multiple ionization stages of silicon (Si II, III, and IV) and carbon (C II, III, and IV). We interpret this ionization pattern as a multiphase medium that contains both collisionally ionized and photoionized gas. Toward PKS 2155-304, for HVCs at -140 and -270 km s^-1, respectively, we measure log N(O VI) = 13.80 ± 0.03 and log N(O VI) = 13.56 ± 0.06; from Lyman series absorption, we find log N(H I) = 16.37 (+0.22/-0.14) and 15.23 (+0.38/-0.22). The presence of high-velocity O VI spread over a broad (100 km s^-1) profile, together with large amounts of low-ionization species, is difficult to reconcile with the low densities, n_e ≈ 5 × 10^-6 cm^-3, in the collisional/photoionization models of Nicastro et al., although the HVCs show a relation in N(Si IV)/N(C IV) versus N(C II)/N(C IV) similar to that of high-z intergalactic clouds. Our results suggest that the high-velocity O VI in these absorbers does not necessarily trace the warm-hot intergalactic medium but instead may trace HVCs with low total hydrogen column density. We propose that the broad high-velocity O VI absorption arises from shock ionization at bow-shock interfaces produced by infalling clumps of gas with velocity shear. The similar ratios of high ions for HVC Complex C and these highly ionized HVCs suggest a common production mechanism in the Galactic halo.
527

Efficient Realistic Cloud Rendering using the Volumetric Rendering Technique : Science, Digital Game Development

Bengtsson, Adam January 2022 (has links)
With high-quality graphics in strong demand in modern video games, realistic clouds are no exception. In many video games the cloud rendering is based on a collection of 2D cloud images rendered into the scene. Previously published work found that, while other techniques can be more appropriate depending on the project, volumetric rendering is the state of the art in cloud rendering. Its one weakness is performance, as it is a very expensive technique: either the high quality of the clouds is not attainable in real time, or the quality has to be scaled back to the point where the clouds lack accuracy or realism in shape.
The aim of the project is to create a cloud generator using the volumetric rendering technique. Three objectives were formulated to satisfy this aim: (1) create a 3D engine in OpenGL that generates clouds with volumetric rendering in real time; (2) create different scenes that increase the computational cost of rendering; (3) run tests across different computers and document the results in terms of performance.
The project was created in C++ with the OpenGL library in Visual Studio. The code builds on a combination of previously published projects on real-time cloud rendering; to save time, two projects by Federico Vaccaro and Sébastien Hillaire were used as references in order to quickly reach a solid foundation for experimenting with the performance of volumetric clouds. The resulting implementation supports three of the many cloud types and updates in real time: density, coverage, light absorption and other parameters can be adjusted to blend between the three cloud types, and the clouds update in real time when the box containing them, the coloring, or the positions of the clouds and the global light are changed.
In conclusion, the goal of rendering the clouds above 60 FPS was only partly met, and only on high-end hardware. The clouds looked realistic enough in the scene, and the efforts to improve the performance did not reduce the overall quality. The high-end computer was able to render the clouds, but the low-end computer struggled with the clouds on their own.
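At the heart of the volumetric technique discussed above is a ray march that accumulates transmittance through a density field using the Beer-Lambert law. A minimal sketch under stated assumptions (names and parameters are illustrative; a real-time renderer adds lighting, phase functions, noise-based density and adaptive stepping, and runs in a shader rather than Python):

```python
import math

def march_transmittance(density_at, t0, t1, sigma_t, steps=64):
    """Accumulate transmittance along a ray segment [t0, t1] through a
    density field: the core loop of volumetric cloud rendering.
    density_at(t) returns cloud density in [0, 1] at ray parameter t;
    sigma_t is the extinction coefficient."""
    dt = (t1 - t0) / steps
    transmittance = 1.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt           # sample at segment midpoint
        rho = density_at(t)
        # Beer-Lambert attenuation over one step
        transmittance *= math.exp(-sigma_t * rho * dt)
    return transmittance

# Uniform unit density over a unit-length segment gives T = exp(-sigma_t)
T = march_transmittance(lambda t: 1.0, 0.0, 1.0, sigma_t=2.0)
```

The per-pixel cost is steps × (density evaluation + light sampling), which is why step count and resolution dominate the performance trade-offs the thesis measures.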
528

Domain adaptation from 3D synthetic images to real images

Manamasa, Krishna Himaja January 2020 (has links)
Background. Domain adaptation describes a model learning from a source data distribution and performing well on target data. Here, domain adaptation is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master thesis is to apply 3D domain adaptation from synthetic images to real images: to bridge the gap between the two domains (synthetic and real point cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Various methods for understanding the data and for analyzing it, to bridge the gap between CAD and CAM and make them similar, are examined. Literature review and a controlled experiment are the research methodologies followed during implementation. In this project, four different deep learning models are trained on the generated data and their performance is compared to determine which performs best. Results. The results are reported through two metrics, accuracy and training time, obtained for each of the deep learning models after the experiment and illustrated as graphs for a comparative analysis of the models on which the data is trained and tested. PointDAN showed better results, with higher accuracy than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that focuses on local and global feature alignment with single-view point data, shows the best results on our data.
529

Apprentissage de nouvelles représentations pour la sémantisation de nuages de points 3D / Learning new representations for 3D point cloud semantic segmentation

Thomas, Hugues 19 November 2019 (has links)
In recent years, new technologies have allowed the acquisition of large and precise 3D scenes as point clouds. They have opened up new applications, such as self-driving vehicles or infrastructure monitoring, that rely on efficient large-scale point cloud processing. Convolutional deep learning methods cannot be used directly with point clouds. In the case of images, convolutional filters brought the ability to learn new representations, which were previously hand-crafted in older computer vision methods. Following the same line of thought, this thesis presents a study of the hand-crafted representations previously used for point cloud processing. We propose several contributions that serve as the basis for the design of a new convolutional representation for point cloud processing: a new definition of multiscale radius neighborhoods, a comparison with multiscale k-nearest neighbors, a new active learning strategy, the semantic segmentation of large-scale point clouds, and a study of the influence of density in multiscale representations. Building on these contributions, we introduce the Kernel Point Convolution (KPConv), which uses radius neighborhoods and a set of kernel points that play the same role as the kernel pixels of an image convolution. Our convolutional networks outperform state-of-the-art semantic segmentation approaches in almost every situation. In addition to these strong results, we designed KPConv with great flexibility, including a deformable version. To conclude, we offer several insights into the representations that our method is able to learn.
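The kernel point idea can be sketched for a single output point: each kernel point carries a weight matrix, applied to neighbor features with a correlation that decays linearly with distance (the published KPConv formulation). The function and array shapes below are an illustrative simplification, not the thesis code, which is batched, learned and GPU-based:

```python
import numpy as np

def kpconv_point(neighbors, feats, kernel_pts, weights, sigma):
    """One KPConv output feature vector for a single center point.
    neighbors:  (N, 3) neighbor offsets from the center point
    feats:      (N, Cin) neighbor features
    kernel_pts: (K, 3) kernel point positions in the same local frame
    weights:    (K, Cin, Cout) one weight matrix per kernel point
    sigma:      influence radius of each kernel point"""
    out = np.zeros(weights.shape[2])
    for k in range(kernel_pts.shape[0]):
        # linear correlation: 1 at the kernel point, 0 beyond sigma
        d = np.linalg.norm(neighbors - kernel_pts[k], axis=1)  # (N,)
        h = np.maximum(0.0, 1.0 - d / sigma)
        # weight neighbor features by correlation, sum, then project
        out += (h[:, None] * feats).sum(axis=0) @ weights[k]
    return out

# One neighbor sitting exactly on one kernel point passes its feature
# through that kernel point's weight matrix unattenuated.
out = kpconv_point(np.zeros((1, 3)), np.array([[1.0, 2.0]]),
                   np.zeros((1, 3)), np.eye(2)[None], sigma=1.0)
```

The kernel points play the role of pixel offsets in an image convolution, but because they are free 3D positions they can also be made deformable, as in the thesis.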
530

Développement d'un modèle microphysique de nuages pour un modèle de climat global vénusien / Development of a microphysical cloud model for the Venus Global Climate Model

Guilbon, Sabrina 27 April 2018 (has links)
The conditions at the surface of Venus are infernal: temperatures above 400 °C and an atmospheric pressure 90 times that of Earth, in an atmosphere composed of 96% carbon dioxide. A distinctive characteristic of this planet is the 20 km thick opaque cloud layer that enshrouds it. Clouds play a crucial role in radiative transfer, in atmospheric dynamics, in the cycles of chemical species such as sulphur, and more generally in the climate of Venus. Despite the numerous space missions devoted to the planet since 1961, there are few in-situ measurements. The lower cloud layers are difficult to study by satellite, so many questions about the clouds remain: their properties and their radiative, dynamical and chemical impacts are poorly constrained. Composed predominantly of a sulphuric acid solution, the particles are assumed to be spherical and liquid; they form clouds spread vertically between about 50 and 70 km of altitude, surrounded by hazes between about 30 and 50 km and above 70 km. Based on observations, the droplets have been classified into three modes according to their size and composition: modes 1 and 2 for small (r = 0.2 μm) and medium (r = 1.0 μm) particles respectively, and a third mode that would contain the largest particles (r = 3.5 μm). This last mode, detected by the Pioneer Venus probe, remains uncertain in composition and existence and is not taken into account in our study. To complete and better understand the observational data, a modal microphysical model named MAD-Muphy (Modal Aerosol Dynamics with Microphysics) has been developed. The goal is to integrate MAD-Muphy into the Venus Global Climate Model (IPSL-VGCM), so the number of variables that the VGCM must follow in time and space (also called tracers) has to be limited. The moment method is already used in the Titan and Mars GCMs and offers a good compromise between the accuracy of the results and the computation time. MAD-Muphy is therefore based on this representation for a pressure and a temperature defined for one atmospheric layer (0D). This thesis details the derivation of the mathematical expressions of the microphysical equations with moments and presents the new MAD-Muphy model together with the hypotheses that were necessary for its development. We first determine the characteristic timescale of each microphysical process and study their behaviour in 0D. Our results are then compared with those of the SALSA sectional model in 0D.
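The appeal of the moment method sketched above is that, for an assumed lognormal size distribution, the moments a GCM transports as tracers have a closed form instead of requiring a binned (sectional) distribution. A sketch under that lognormal assumption; the function name and example values are illustrative, not taken from MAD-Muphy:

```python
import math

def lognormal_moment(k, n_tot, r_g, sigma_g):
    """k-th radial moment M_k = integral of r^k n(r) dr for a lognormal
    size distribution with total number n_tot, median radius r_g and
    geometric standard deviation sigma_g:
        M_k = n_tot * r_g^k * exp(k^2 * ln(sigma_g)^2 / 2).
    A GCM can carry a few such moments per mode as tracers instead of
    a full binned size distribution."""
    ln_s = math.log(sigma_g)
    return n_tot * r_g ** k * math.exp(0.5 * k * k * ln_s * ln_s)

# M_0 is just the total number concentration, whatever r_g and sigma_g
m0 = lognormal_moment(0, 100.0, 0.2, 1.8)  # -> 100.0
```

Microphysical processes (condensation, coagulation, sedimentation) are then written as tendencies of these few moments, which is what keeps the method cheap enough for a global model compared with a sectional scheme like SALSA.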
