21

Reinforcement Learning Application in Wavefront Sensorless Adaptive Optics System

Zou, Runnan 13 February 2024 (has links)
With the increasing exploration of space and the widespread use of communication tools worldwide, near-ground satellite communication has emerged as a promising tool in fields such as aerospace, the military, and microscopy. However, air and water in the atmosphere distort the light signal, so the ground base must retrieve the original signal from the distorted light sent by the satellite. Traditionally, Shack-Hartmann sensors or charge-coupled devices are integrated into the system for distortion measurement. In pursuit of a cost-effective system with optimal performance and faster response, the sensor and charge-coupled device have been replaced in this project by a photodiode and a single-mode fiber. Since the system has limited observation capability, it requires a powerful controller for optimal performance. To address this issue, we have implemented an off-policy reinforcement learning framework, the soft actor-critic, in the adaptive optics system controller. This integration yields a model-free online controller capable of mitigating wavefront distortion. The soft actor-critic controller processes the data matrix acquired from the photodiode and generates a two-dimensional array of control signals for the deformable mirror, which corrects the wavefront distortion induced by the atmosphere and refocuses the signal to maximize the incoming power. The parameters of the soft actor-critic controller have been tuned for optimal system performance. Simulations compare the proposed controller against wavefront sensor-based methods; the controller has been trained and verified in both static and semi-dynamic atmospheres under different atmospheric conditions. Simulation results demonstrate that, in severe atmospheric conditions, the adaptive optics system with the soft actor-critic controller achieves average Strehl ratios above 55% in static and 30% in semi-dynamic atmospheres. Furthermore, the power of the distorted wavefront can be concentrated at the center of the focal plane and the fiber, providing an improved signal.
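To make the control setup concrete, here is a minimal sketch of a sensorless loop of this kind: a toy model in which deformable-mirror commands partially cancel a fixed aberration and the reward is the photodiode-like coupled power. The random-perturbation agent is a stand-in for the soft actor-critic described in the abstract; the actuator count, coupling model, and all names are illustrative assumptions, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACT = 16                                 # deformable-mirror actuator count (assumed)
aberration = rng.normal(0.0, 0.5, N_ACT)   # fixed atmospheric phase error (toy model)

def coupled_power(dm_command: np.ndarray) -> float:
    """Photodiode reward: power coupled into the fiber decays with the
    residual wavefront error (a Strehl-like Gaussian approximation)."""
    residual = aberration + dm_command
    return float(np.exp(-np.sum(residual**2)))

# Stand-in for the soft actor-critic: random-perturbation hill climbing.
# A real SAC agent would instead learn a stochastic policy and Q-functions
# from (state, action, reward) tuples stored in a replay buffer.
command = np.zeros(N_ACT)
best_reward = coupled_power(command)
for step in range(2000):
    candidate = command + rng.normal(0.0, 0.05, N_ACT)   # exploration noise
    reward = coupled_power(candidate)
    if reward > best_reward:          # keep commands that raise coupled power
        command, best_reward = candidate, reward

print(f"final coupled power (Strehl-like): {best_reward:.3f}")
```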
22

Exploring Performance Portability for Accelerators via High-level Parallel Patterns

Hou, Kaixi 27 August 2018 (has links)
Nowadays, parallel accelerators have become prominent and ubiquitous, e.g., multi-core CPUs, many-core GPUs (Graphics Processing Units), and Intel Xeon Phi. The performance gains from them can be as high as many orders of magnitude, attracting extensive interest from many scientific domains. However, the gains come with two main problems: (1) a complete redesign of existing codes might be required when a new parallel platform is adopted, a nightmare for developers; and (2) parallel codes that execute efficiently on one platform might be inefficient or even non-executable on another, causing portability issues. To handle these problems, this dissertation proposes a general approach based on parallel patterns, an effective abstraction layer that eases the generation of efficient parallel codes for given algorithms and across architectures. From algorithms to parallel patterns, we exploit domain expertise to analyze the computational and communication patterns in the core computations and represent them in a DSL (Domain Specific Language) or as algorithmic skeletons. This preserves the essential information, such as data dependencies and types, for subsequent parallelization and optimization. From parallel patterns to actual codes, we use a series of automation frameworks and transformations to determine which levels of parallelism can be used, what the optimal instruction sequences are, how the implementation changes to match different architectures, and so on. We evaluate our approaches on several important computational kernels, including sort (and segmented sort), sequence alignment, and stencils, across various parallel platforms (CPUs, GPUs, Intel Xeon Phi). / Ph. D.
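The pattern idea can be illustrated with a short sketch: the algorithm is written once against an abstract map skeleton, and a backend choice determines how it executes. This Python stub (names assumed; the dissertation itself targets DSLs and generated CPU/GPU code, not this API) shows the separation between pattern and platform that makes portability possible.

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Callable, Iterable, List, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def map_pattern(f: Callable[[T], R], xs: Iterable[T], backend: str = "serial") -> List[R]:
    """A 'map' parallel pattern: the caller states *what* to compute;
    the backend decides *how* (serial loop vs. process pool here; a real
    framework would also target GPUs or vector units)."""
    if backend == "serial":
        return [f(x) for x in xs]
    if backend == "processes":
        with ProcessPoolExecutor() as pool:
            return list(pool.map(f, xs))
    raise ValueError(f"unknown backend: {backend}")

def heavy_kernel(x: int) -> int:
    return sum(i * i for i in range(x))    # stand-in for a real computation

if __name__ == "__main__":
    data = [10_000] * 8
    # The same algorithm runs unchanged on either backend.
    assert map_pattern(heavy_kernel, data) == map_pattern(heavy_kernel, data, "processes")
```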
23

Measurement and Analysis of Wavefront Deviations and Distortions by Freeform Optical See-through Head Mounted Displays

Kuhn, Jason William January 2016 (has links)
A head-mounted display with an optical combiner may introduce a significant amount of distortion to the real-world scene. The ability to accurately model the effects of both 2-dimensional and 3-dimensional distortion introduced by thick optical elements has many uses in the development of head-mounted display systems and applications. For instance, the computer rendering system must be able to accurately model this distortion and provide compensation in the virtual path in order to produce a seamless overlay between the virtual and real-world scenes. In this paper, we present a ray tracing method that determines the ray shifts and deviations introduced by a thick optical element, giving us the ability to generate correct computational models for rendering a virtual object in 3D space with the appropriate amount of distortion. We also demonstrate how a Hartmann wavefront sensor approach can be used to evaluate the manufacturing errors in a freeform optical element to better predict wavefront distortion. A classic Hartmann mask is used as an inexpensive and easily manufacturable solution for accurate wavefront measurements. The paper further suggests two techniques for improving the slope measurement accuracy and resolution: scanning the Hartmann mask laterally to obtain dense sampling, and increasing the view-screen distance to the testing aperture. The paper quantifies the improvements of these techniques on measuring both the high- and low-sloped wavefronts often seen in freeform optical see-through head-mounted displays. By comparing the measured wavefront to theoretical wavefronts constructed with ray tracing software, we determine the sources of error within the freeform prism. We also present a testing setup capable of measuring off-axis viewing angles to replicate how the system would perform when worn by its user.
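The core Hartmann measurement reduces to simple geometry: each spot's displacement on the view screen, divided by the mask-to-screen distance, gives the local wavefront slope, which is why a longer screen distance improves slope resolution. A minimal sketch under assumed values (hole pitch, distance, and displacements are illustrative, not from the paper):

```python
import numpy as np

L = 0.5        # mask-to-screen distance in meters (assumed)
pitch = 1e-3   # Hartmann hole spacing in meters (assumed)

# Measured spot displacements on the screen, in meters (illustrative data).
dx = np.array([[2e-6, 5e-6], [1e-6, 3e-6]])
dy = np.array([[0e-6, 1e-6], [2e-6, 4e-6]])

# Local wavefront slopes: displacement / propagation distance.
# Doubling L doubles the displacement for the same slope, so a longer
# screen distance improves slope resolution (until spots overlap).
slope_x = dx / L
slope_y = dy / L

# Crude wavefront reconstruction: integrate slopes across the aperture.
wavefront = np.cumsum(slope_x, axis=1) * pitch + np.cumsum(slope_y, axis=0) * pitch
print(wavefront)
```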
24

Investigation of alternative pyramid wavefront sensors

van Kooten, Maaike 20 July 2016 (has links)
A pyramid wavefront sensor (PWFS) bench has been set up at the National Research Council-Herzberg (Victoria, Canada) to investigate the feasibility of a lenslet-based PWFS and a double-roof-prism-based PWFS as alternatives to a classical PWFS, and to test the methodology proposed for pyramid wavefront sensing in NFIRAOS for the Thirty Meter Telescope (TMT). Traditional PWFS optics require shallow angles and strict apex tolerances, making them difficult to manufacture. Lenslet arrays, on the other hand, are common optical components that can be made to the desired specifications and are thus readily available. A double roof prism pyramid, also readily available, has been shown by optical designers to be optically equivalent. Characterizing these alternative pyramids, and understanding how they differ from a traditional pyramid, will allow the PWFS to become more widely used, especially in the laboratory setting. In this work, the responses of the SUSS microOptics 300-4.7 array and two ios Optics roof prisms are compared to a double PWFS as well as an idealized PWFS. The evolution of the modulation and dithering hardware, the system control configuration, and the relationship between this system and NFIRAOS are also explored. / Graduate
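Whatever the pyramid's physical form — glass pyramid, double roof prism, or lenslet array — the downstream signal processing is the same: four pupil images are combined into slope maps. A sketch of the standard quad-cell-like estimate (facet-pairing conventions vary between instruments; this is one common choice, not necessarily the bench's):

```python
import numpy as np

def pwfs_slopes(I1, I2, I3, I4, eps=1e-12):
    """Slope maps from the four pupil images of a pyramid WFS.
    I1..I4 are 2-D arrays of the same shape, one per pyramid facet;
    differences of facet pairs, normalized by the total intensity,
    approximate the local wavefront slopes in x and y."""
    total = I1 + I2 + I3 + I4 + eps          # eps guards against division by zero
    sx = ((I1 + I2) - (I3 + I4)) / total     # left/right facet pairs
    sy = ((I1 + I3) - (I2 + I4)) / total     # top/bottom facet pairs
    return sx, sy

# Toy usage: a pure x-tilt shows up in sx only.
flat = np.full((8, 8), 0.25)
sx, sy = pwfs_slopes(flat * 1.2, flat * 1.2, flat * 0.8, flat * 0.8)
print(sx.mean(), sy.mean())   # nonzero x-slope, ~zero y-slope
```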
25

Transfer of learning from traditional optics to wavefront aberrometry

McBride, Dyan L. January 1900 (has links)
Doctor of Philosophy / Department of Physics / Dean A. Zollman / This research presents an investigation of how students dynamically construct knowledge in a new situation. In particular, this work focuses on the contexts of light and optics and examines the dynamic construction of an understanding of wavefront aberrometry. The study began with clinical interviews designed to elicit students' prior knowledge about light, basic optics, and vision; the data were analyzed phenomenographically to obtain student models of understanding and examine the possible model variations. The results indicate that students have a significant number of resources in this subject area, though some are incomplete or less useful than others. In subsequent phases, many learning and teaching interviews were conducted to design and test scaffolding procedures that could help students as they constructed their understanding of the given phenomenon. Throughout this work, student responses were analyzed in terms of the resources being used through the knowledge construction process. Finally, a modified analysis method is presented and used to quantify what types of concepts students use while constructing their understanding, and how they are able to link varying types of concepts together. Significant implications extend beyond the single context of wavefront aberrometry. Each distinct analysis technique provides further insight into the ways in which students learn across contexts and the ways in which we can scaffold their learning to improve curriculum and instruction.
26

Simulation of anisotropic wave propagation in Vertical Seismic Profiles

Durussel, Vincent Bernard 30 September 2004 (has links)
The influence of elastic anisotropy on seismic wave propagation is often neglected for the sake of simplicity. However, ignoring anisotropy may lead to significant errors in the processing of seismic data and, ultimately, a poor image of the subsurface. This is especially true in wide-aperture Vertical Seismic Profiles, where waves travel both vertically and horizontally. Anisotropy was neglected in wavefront construction methods of seismic ray tracing until Gibson (2000), who showed that they are powerful tools for simulating seismic wave propagation in three-dimensional anisotropic subsurface models. The code is currently under development using a C++ object-oriented programming approach because it provides high flexibility in the design of new components and facilitates the debugging and maintenance of a complex algorithm. So far, the code has been used to simulate propagation in homogeneous or simple heterogeneous anisotropic velocity models designed mainly for testing purposes; in particular, it had never been applied to a field dataset. We propose here an analytical method, involving little algebra, that allows the design of realistic heterogeneous anisotropic models within this C++ object-oriented framework. The new model class can represent smooth multi-layered subsurfaces with gradients, as well as models with many dip variations. It has been used to model first-arrival times of a wide-aperture VSP dataset from the Gulf of Mexico in order to estimate the amount of anisotropy. The proposed velocity model is transversely isotropic; the anisotropy is constant throughout the model and is defined via Thomsen's parameters. The values in the final model are epsilon = 0.055 and delta = -0.115. The model is compatible with a priori knowledge of the local geology and reduces the RMS average time difference between measured and computed travel times by 51% in comparison with the initial isotropic model. These values are realistic and similar to other measurements of anisotropy in the Gulf of Mexico.
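For readers unfamiliar with Thomsen's notation, the standard weak-anisotropy approximation shows how epsilon and delta enter the P-wave phase velocity as a function of the phase angle from the symmetry axis (Thomsen, 1986):

```latex
% Weak-anisotropy P-wave phase velocity in a transversely isotropic medium:
%   v_P0 = vertical P-wave velocity, theta = phase angle from the symmetry axis.
\[
  v_P(\theta) \approx v_{P0}\left(1 + \delta \sin^2\theta \cos^2\theta
                                    + \epsilon \sin^4\theta\right)
\]
% epsilon controls the horizontal/vertical velocity contrast,
% v_P(90^\circ) \approx v_{P0}(1+\epsilon); delta governs near-vertical
% propagation, the regime that dominates travel times in a VSP geometry.
```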
27

Decision Making of Mobile Robot in the Presence of Risk on Its Surroundings

Huh, Sung 2011 December 1900 (has links)
Mobile robots are used in many areas, and demand for them in extreme terrain, hazardous areas, and life-threatening places is increasing as a way to reduce the loss of life. A good decision-making capability is essential for the successful navigation of an autonomous robot, and it affects finding the shortest or optimal path within a given condition. The wavefront algorithm is simple to apply, yet yields an optimal path for a robot to follow in many different configurations. Although the path created using the wavefront algorithm is optimal in the sense that every node has the same cost, the result is not the best from a global perspective, because the algorithm does not take the surrounding conditions into account. To solve this issue and obtain the best result from a global perspective, a risk factor analysis method was implemented on top of the wavefront algorithm to improve its performance. In this work, the relationship between the wavefront algorithm and dynamic programming is explained to show that the wavefront algorithm obeys the principle of optimality. The simulation results display better performance on safety, while keeping the travelling distance minimal, when the risk factor is used in the wavefront algorithm, and the robot in actual tests behaves accordingly. This work contributes to mobile robot decision making by using the risk factor method to create a more desirable and safe path, and demonstrates how the risk factor method can be applied to mobile robot navigation.
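A sketch of the underlying idea: plain wavefront planning is a breadth-first expansion from the goal in which every free cell costs the same; folding a per-cell risk term into the expansion (here via Dijkstra, the natural generalization once costs become unequal) steers the path away from hazardous cells. The grid, risk values, and cost formula are illustrative assumptions, not the thesis's exact method.

```python
import heapq

FREE, WALL = 0, 1

def risk_wavefront(grid, risk, goal):
    """Dijkstra-style wavefront expansion from the goal.
    Cell cost = 1 (distance) + risk[cell]; with all risks zero this
    reduces to the classic uniform-cost wavefront (BFS) propagation."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    heap = [(0.0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == FREE:
                nd = d + 1.0 + risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

def descend(dist, start):
    """Follow strictly decreasing cost-to-goal values from the start cell."""
    path, cur = [start], start
    while dist.get(cur, 0.0) > 0.0:
        r, c = cur
        cur = min(((r + dr, c + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (r + dr, c + dc) in dist), key=dist.get)
        path.append(cur)
    return path

grid = [[FREE] * 5 for _ in range(5)]
risk = [[0.0] * 5 for _ in range(5)]
risk[2][2] = 5.0                      # hazardous cell the path should avoid
dist = risk_wavefront(grid, risk, goal=(4, 4))
print(descend(dist, start=(0, 0)))    # detours around (2, 2)
```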
28

A Run-Time Loop Parallelization Technique on Shared-Memory Multiprocessor Systems

Wu, Chi-Fan 06 July 2000 (has links)
High-performance computing power is important for the advanced calculations of current scientific applications. A multiprocessor system obtains its high performance from the fact that some computations can proceed in parallel. A parallelizing compiler can take a sequential program as input and automatically translate it into parallel form for the target multiprocessor system. But for loops whose arrays have irregular, nonlinear, or dynamic access patterns, no current parallelizing compiler can determine at compile time whether data dependences exist. Thus a run-time parallel algorithm must be utilized to determine the dependences and extract the potential parallelism of such loops. In this thesis, we propose an efficient run-time parallelization technique to compute a proper parallel execution schedule for those loops. This new method first detects the immediate predecessor iterations of each loop iteration and constructs an immediate predecessor table, then efficiently schedules all loop iterations into wavefronts for parallel execution. According to both theoretical analysis and experimental results, our new run-time parallelization technique achieves high speedup with low processing overhead. Furthermore, this technique is well suited to implementation on multiprocessor systems due to its high scalability.
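A sketch of the scheduling step: given each iteration's immediate predecessors (recovered at run time from the actual array subscripts), an iteration's wavefront number is one more than the maximum wavefront of its predecessors, and all iterations within a wavefront can run in parallel. The dependence data here is an illustrative assumption, not the thesis's algorithm verbatim.

```python
from collections import defaultdict

def build_wavefronts(n_iters, predecessors):
    """Assign each loop iteration to a wavefront.
    predecessors[i] lists the immediate predecessor iterations of i
    (iterations whose writes iteration i reads). Iterations with no
    predecessors form wavefront 0; wavefronts execute in order, and
    all iterations inside one wavefront execute in parallel."""
    wave = {}
    for i in range(n_iters):              # loop order: predecessors are
        preds = predecessors.get(i, [])   # always earlier, so already assigned
        wave[i] = 1 + max((wave[p] for p in preds), default=-1)
    schedule = defaultdict(list)
    for i, w in wave.items():
        schedule[w].append(i)
    return [schedule[w] for w in sorted(schedule)]

# Toy dependence pattern, e.g. from A[idx[i]] accesses observed at run time:
# iteration 2 depends on 0, iteration 3 on 1, iteration 4 on both 2 and 3.
deps = {2: [0], 3: [1], 4: [2, 3]}
print(build_wavefronts(5, deps))   # [[0, 1], [2, 3], [4]]
```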
30

Development and Verification of the non-linear Curvature Wavefront Sensor

Mateen, Mala January 2015 (has links)
Adaptive optics (AO) systems have become an essential part of ground-based telescopes and enable diffraction-limited imaging at near-IR and mid-IR wavelengths. For several key science applications the required wavefront quality is higher than what current systems can deliver. For instance, obtaining high-quality diffraction-limited images at visible wavelengths requires residual wavefront errors well below 100 nm RMS. High-contrast imaging of exoplanets and disks around nearby stars requires high-accuracy control of the low-order modes that dominate atmospheric turbulence and scatter light at the small angles where exoplanets are likely to be found. Imaging planets with a high-contrast coronagraphic camera, as in the Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) instrument on the Very Large Telescope (VLT) and the Gemini Planet Imager (GPI), requires even greater wavefront control accuracy. My dissertation develops a highly sensitive non-linear Curvature Wavefront Sensor (nlCWFS) that can deliver diffraction-limited (λ/D) images in the visible by approaching the theoretical sensitivity limit imposed by fundamental physics. The nlCWFS is derived from the successful curvature wavefront sensing concept but uses a non-linear reconstructor in order to maintain sensitivity to low spatial frequencies. This sensitivity makes the nlCWFS optimal for extreme-AO and visible-AO systems because it utilizes the full spatial coherence of the pupil plane, as opposed to conventional sensors such as the Shack-Hartmann Wavefront Sensor (SHWFS), which operate at the atmospheric seeing limit (λ/r₀). The difference is equivalent to a gain of (D/r₀)² in sensitivity for the lowest-order mode, which translates to the nlCWFS requiring that many fewer photons. When background limited, the nlCWFS sensitivity scales as D⁴, a combination of a D² gain due to the diffraction limit and a D² gain due to the telescope's collecting power, whereas conventional wavefront sensors benefit only from the D² gain of the telescope's collecting power. For a 6.5 m telescope, at 0.5 μm, and seeing of 0.5", the nlCWFS can deliver, for low-order modes, the same wavefront measurement accuracy as the SHWFS with 1000 times fewer photons. This is especially significant for upcoming extremely large telescopes such as the Giant Magellan Telescope (GMT), which has a 25.4 m aperture, the Thirty Meter Telescope (TMT), and the European Extremely Large Telescope (E-ELT), which has a 39 m aperture.
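The quoted factor of 1000 follows directly from the (D/r₀)² gain. A quick consistency check, using the standard seeing-to-Fried-parameter relation (the 0.98 prefactor is the usual approximation):

```latex
% Fried parameter from the quoted seeing (0.5 arcsec at 0.5 micron):
\[
  r_0 \approx \frac{0.98\,\lambda}{\theta_{\mathrm{seeing}}}
      = \frac{0.98 \times 0.5 \times 10^{-6}\,\mathrm{m}}
             {0.5'' \times 4.85 \times 10^{-6}\,\mathrm{rad}/''}
      \approx 0.20\,\mathrm{m}
\]
% Sensitivity gain of the nlCWFS over a seeing-limited sensor,
% consistent with the quoted factor of ~1000:
\[
  \left(\frac{D}{r_0}\right)^{2} = \left(\frac{6.5\,\mathrm{m}}{0.20\,\mathrm{m}}\right)^{2}
  \approx 1 \times 10^{3}
\]
```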
