1

Development of a Strontium-87 Ion Interferometer

Erickson, Christopher Joseph 14 December 2011 (has links) (PDF)
I present the construction of a low-velocity intense source (LVIS) of laser-cooled neutral strontium using permanent ring magnets. The LVIS consists of a magneto-optical trap from which cold strontium is extracted in a well-collimated beam. I also present the development and implementation of a full suite of low-noise, high-bandwidth laser control electronics, including a microcontroller unit. This microcontroller remotely controls and monitors the current driver, temperature controller, and PID lock circuit for each diode laser simultaneously. The current driver output is accurate to within 2 microamps and repeatable to within a few nanoamps. The noise spectral density of the current driver reaches a floor of 10^(-10) amps per root Hz at ~50 Hz, and the driver has a modulation bandwidth of ~50 MHz. The PID lock circuit includes a scan-balancing option that we have used to scan an AR-coated laser diode ~30 GHz mode-hop free. I describe the construction of an 80 mW frequency-doubled 461 nm laser system using PPKTP for cooling and trapping neutral strontium in the LVIS. The LVIS, the electronics systems, and the 461 nm laser system represent major milestones on the way to producing a matter-wave interferometer using Sr-87 ions. The interferometer is based on an optical Raman transition between the hyperfine ground states of the Sr-87 ion. The ions will be produced by exciting the strontium LVIS beam to an auto-ionizing state in the continuum. In the interferometer, two half-pi pulses of light and one pi pulse will be delivered to the ions to split and recombine their wave functions. I present calculations of the predicted sensitivity and a discussion of possible applications. I present a method for locking a 407.8 nm laser to the 5s ²S₁/₂ to 5p ²P₃/₂ strontium ion transition in a neutral vapor. I present calculations of the vacuum levels necessary for the experiment and describe the preparation and assembly of the vacuum apparatus.
The major vacuum system consists of two connected elastomer-sealed chambers, one at 10^(-7) Torr and the other at 10^(-10) Torr, separated by a region of low conductance. I present a Sr vapor cell constructed from standard CF fittings that allows the strontium to be heated to ~730 °C and that can also be run as a thermal beam source. I present a method for protecting the viewports on small-form alkaline-earth vapor cells with lead or indium foil during the evaporation of oxide layers. Finally, I report on the current status of the experiment and detail future work on the apparatus.
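The half-pi/pi/half-pi pulse sequence described above is the standard Mach-Zehnder-type light-pulse geometry. As a hedged illustration of the kind of sensitivity calculation the abstract mentions (the textbook result, not necessarily the thesis's own derivation), the phase shift accumulated by such an interferometer under a uniform acceleration a is:

```latex
\Delta\phi = k_{\mathrm{eff}}\, a\, T^{2}
```

where k_eff is the effective two-photon Raman wavevector and T is the free-evolution time between pulses. The quadratic scaling in T is why long interrogation times are the main lever on sensitivity.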
2

Elevation Changes in Greenland over Two Decades from Cross-Platform LIDAR Analysis

Wheelock-Davis, Emily J. 08 August 2013 (has links)
No description available.
3

A Deep Learning Study on the Retrieval of Forest Parameters from Spaceborne Earth Observation Sensors

Carcereri, Daniel 25 July 2024 (has links)
The efficient and timely monitoring of forest dynamics is of paramount importance and requires accurate, high-resolution, and time-tagged predictions at global scale. Although numerous methodologies have been proposed in the literature, existing approaches often compromise on accuracy, resolution, temporal fidelity, or coverage. To address these challenges and limitations, the main objective of this doctoral thesis is to investigate the potential of artificial intelligence (AI) for the regression of bio-physical forest parameters from spaceborne Earth Observation (EO) data. This work explores for the first time the combined use of TanDEM-X single-pass interferometric products and convolutional neural networks for canopy height estimation at country scale. To achieve this, a novel deep learning framework is proposed, leveraging the capability of deep neural networks to capture the complex spatial relationships between forest properties and satellite data while remaining adaptable to different environmental conditions. The design and understanding of the model are driven by explainable-AI principles and by considerations of large-scale forest dynamics, with particular emphasis on the challenges posed by the variable acquisition geometry of the TanDEM-X mission, and rely on LVIS-derived LiDAR measurements as reference data. Moreover, several investigations are conducted on the adaptability of the developed framework for transferring knowledge to related domains, such as digital terrain model regression and above-ground biomass density estimation. Finally, the capability of the proposed approach to be extended to other EO sensors is also evaluated, with particular emphasis on the ESA Sentinel-1 and Sentinel-2 missions. The developed deep learning framework sets a solid groundwork for the generation of large-scale products of bio-physical forest parameters from spaceborne EO data.
The approach achieves cutting-edge performance, significantly advancing the current state of forest assessment and monitoring technologies.
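Training a regressor against LVIS-derived LiDAR reference data implies supervision that is sparse: airborne LiDAR footprints cover only a fraction of the satellite scene. A minimal sketch of how such a loss might be computed, assuming a masked mean-squared-error formulation (the function and example values are hypothetical, not taken from the thesis):

```python
import numpy as np

def masked_mse(pred, ref, mask):
    """MSE over only those pixels where a LiDAR reference height exists.

    pred, ref : 2-D arrays of canopy-height values in meters
    mask      : boolean array, True where a reference measurement is available
    """
    diff = (pred - ref)[mask]
    return float(np.mean(diff ** 2))

# Hypothetical 3x3 tile: reference heights known at only two pixels.
pred = np.array([[10.0, 12.0, 8.0],
                 [15.0, 11.0, 9.0],
                 [14.0, 13.0, 7.0]])
ref = np.zeros_like(pred)
ref[0, 0], ref[2, 2] = 11.0, 9.0
mask = np.zeros(pred.shape, dtype=bool)
mask[0, 0] = mask[2, 2] = True

loss = masked_mse(pred, ref, mask)  # ((10-11)^2 + (7-9)^2) / 2 = 2.5
```

The mask keeps the unlabeled pixels from contributing gradient, which is the usual way to reconcile wall-to-wall satellite predictions with footprint-level LiDAR reference.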
4

Towards meaningful and data-efficient learning : exploring GAN losses, improving few-shot benchmarks, and multimodal video captioning

Huang, Gabriel 09 1900 (has links)
In recent years, the field of deep learning has seen tremendous progress in applications ranging from image generation, object detection, and language modeling to visual question answering. Classic approaches such as supervised learning require large amounts of task-specific labeled data, which may be too expensive, time-consuming, or impractical to collect. Data-efficient methods, such as few-shot and self-supervised learning, attempt to deal with the limited availability of task-specific data by leveraging large amounts of general data. Progress in deep learning, and in few-shot learning in particular, is largely driven by the relevant benchmarks, evaluation metrics, and datasets: they are used to test and compare different methods on a given task and to determine the state of the art. However, because they are idealized versions of the task to solve, benchmarks are rarely equivalent to the original task and can have several limitations that hinder their role of identifying the most promising research directions.
Moreover, defining meaningful evaluation metrics can be challenging, especially in the case of high-dimensional and structured outputs, such as images, audio, speech, or text. This thesis discusses the limitations and perspectives of existing benchmarks, training losses, and evaluation metrics, with a focus on generative modeling, Generative Adversarial Networks (GANs) in particular, and data-efficient modeling, which includes few-shot and self-supervised learning. The first contribution is a discussion of the generative modeling task, followed by an exploration of theoretical and empirical properties of the GAN loss. The second contribution is a discussion of a limitation of few-shot classification benchmarks, namely that they may not require class-semantic generalization to be solved, and the proposal of a baseline method for solving them without test-time labels. The third contribution is a survey of few-shot and self-supervised object detection, which points out limitations and promising future research directions for the field. Finally, the fourth contribution is a data-efficient method for video captioning, which leverages unsupervised text and video datasets, and explores several multimodal pretraining strategies.
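For reference, the standard GAN objective whose theoretical and empirical properties such an exploration builds on (the original minimax formulation; the thesis may study other loss variants) is:

```latex
\min_{G}\,\max_{D}\;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_{z}}\!\left[\log\!\big(1 - D(G(z))\big)\right]
```

where D is the discriminator, G the generator, and p_z the prior over latent codes; at the inner optimum this objective reduces to a divergence between the data and generator distributions, which is what makes its theoretical properties worth examining.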
