
Path Integral Approach to Levy Flights and Hindered Rotations

Janakiraman, Deepika. January 2013
Path integral approaches have long been used in both quantum mechanics and statistical mechanics. In addition to being a tool for obtaining the probability distributions of interest (wave functions, in the case of quantum mechanics), these methods are very instructive and offer great insight into the problem. In this thesis, path integrals are employed extensively to study some very interesting problems in both equilibrium and non-equilibrium statistical mechanics. In the non-equilibrium regime, we have used a path integral approach to study a very interesting class of anomalous diffusion, viz. Lévy flights. In equilibrium statistical mechanics, we have evaluated the partition function for a class of molecules referred to as hindered rotors, which have a barrier for internal rotation. We have also evaluated the exact quantum statistical mechanical propagator for a harmonic potential with a time-dependent force constant, valid under certain conditions. Diffusion processes have attracted a great deal of scientific attention because of their presence in a wide range of phenomena. Brownian motion, usually driven by thermal noise, is the most widely known class of diffusion. However, there are other classes of diffusion which cannot be classified as Brownian motion and therefore fall under the category of anomalous diffusion. As the name suggests, the properties of this class of diffusion are very different from those of ordinary Brownian motion. We are interested in a particular class of anomalous diffusion referred to as Lévy flights, in which the step sizes taken by the particle during the random walk are drawn from what is known as a Lévy distribution. A diverging mean square displacement is a typical feature of Lévy flights, as opposed to the finite mean square displacement, linear in time, of Brownian motion. Lévy distributions are characterized by an index α, where 0 < α ≤ 2.
When α = 2 the distribution becomes a Gaussian, and when α = 1 it reduces to a Cauchy/Lorentzian distribution. In the overdamped limit of friction, the probability density, or propagator, associated with Lévy flights can be described by a position-space fractional Fokker-Planck equation (FFPE) [1-3]. Jespersen et al. [4] have solved the FFPE in the Fourier domain to obtain the propagator for free Lévy flight (absence of an external potential) and for Lévy flights in linear and harmonic potentials. We use a path integral technique to study Lévy flights. Lévy distributions rarely have a compact analytical expression in position space. However, their Fourier transforms are rather simple and are given by e^{-D|p|^α}, where D determines the width of the distribution. Owing to the absence of a simple analytical expression, past attempts to study Lévy flights using path integrals in position space [5, 6] have not been very successful. In our approach, we make use of the elegant representation of the Lévy distribution in Fourier space and therefore write the propagator in terms of a two-dimensional path integral: one over paths in position space (x) and the other over paths in Fourier space (p). We shall refer to this space as the 'phase space'. Such a representation is similar to the Hamiltonian path integral of quantum mechanics introduced by Garrod [7]. If we try to perform the path integral over the Fourier variables first, what remains is the usual position-space path integral for Lévy flights, which is rather difficult to solve. Instead, we perform the position-space path integral first, which results in expressions that are rather simple to handle. Using this approach, we have obtained the propagators for free Lévy flight and for Lévy flights in linear and harmonic potentials in the overdamped limit [8]. The results obtained by this method are in complete agreement with those of Jespersen et al. [4].
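The Fourier-space form e^{-D|p|^α} can be checked numerically: inverse-transforming the characteristic function recovers the Cauchy density for α = 1 and a Gaussian for α = 2. A minimal sketch (the function name and quadrature settings are illustrative, not from the thesis):

```python
import numpy as np

def levy_density(x, alpha, D=1.0, p_max=60.0, n=200001):
    """Symmetric Levy density at x: (1/pi) * integral_0^inf cos(p*x) exp(-D p^alpha) dp."""
    p = np.linspace(0.0, p_max, n)
    f = np.cos(p * x) * np.exp(-D * p**alpha)
    dp = p[1] - p[0]
    # trapezoidal rule over a grid fine enough for these smooth integrands
    integral = dp * (f.sum() - 0.5 * (f[0] + f[-1]))
    return integral / np.pi

# alpha = 1, D = 1 reduces to the Cauchy density 1/(pi*(1 + x^2))
print(levy_density(0.0, alpha=1.0))   # ~0.3183 = 1/pi
# alpha = 2, D = 1 gives a Gaussian with variance 2D
print(levy_density(0.0, alpha=2.0))   # ~0.2821 = 1/(2*sqrt(pi))
```

The same routine shows the heavy tails responsible for the diverging mean square displacement: for α < 2 the density decays as a power law rather than exponentially.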
In addition, we were able to obtain the exact propagator for Lévy flights in a harmonic potential with a time-dependent force constant, which has not been reported in the literature. Another interesting problem that we have considered in the overdamped limit is the probability distribution of the area under the trajectory of a Lévy particle. The distributions, again, were obtained for free Lévy flight and for Lévy flights subjected to linear and harmonic potentials. For the harmonic potential, we have considered both time-dependent and time-independent force constants. As in the overdamped limit, the probability distribution for Lévy flights in the underdamped limit of friction can also be described by a fractional Fokker-Planck equation, although in the full phase space. However, this equation has not yet been solved for general α to obtain the complete propagator in terms of both position and velocity. Using our path integral approach, we have obtained the exact full phase-space propagators for all values of α for free Lévy flights as well as in the presence of linear and harmonic potentials [8]. The results we obtain are all exact when the potential is at most harmonic. For potentials beyond harmonic, such as a cubic potential, we have used a semiclassical evaluation in which we extremize the action to find an optimal path and then account for fluctuations around this optimal path. Such potentials are very useful in describing the escape of a particle over a barrier. The barrier-crossing problem has been studied extensively for Brownian motion (the Kramers problem), and the associated rate constant has been calculated by a variety of methods, including the path integral approach. We are interested in its Lévy analogue, the escape over a barrier of a particle driven by Lévy noise.
On extremizing the action, which depends on both phase-space variables, we arrive at optimal paths in both position space and the space of the conjugate variable, p. These paths form an infinite hierarchy of instanton paths, all of which have to be accounted for in order to obtain the correct rate constant. Care has to be taken when accounting for fluctuations around the optimal path, since these fluctuations should be independent of the time-translational mode of the instanton paths; we devised an 'orthogonalization' scheme to ensure this. Our procedure is valid in the limit of large barrier height (or very small diffusion constant), which ensures a small but steady flux of particles over the barrier even at very large times. Unlike the traditional Kramers expression, the rate constant for barrier crossing assisted by Lévy noise does not depend exponentially on the barrier height. For a wide range of α, other than values very close to α = 2, the rate constant is proportional to D^μ, where μ ≈ 1 and D is the diffusion constant. These observations are consistent with the simulation results of Chechkin et al. [9]. Moreover, our approach, when applied to Brownian motion, gives the correct dependence on D. In equilibrium statistical mechanics we have considered two problems. In the first, we evaluate exactly the imaginary-time propagator for a harmonic oscillator with a time-dependent force constant ω²(t), when ω²(t) is of the form λ²(t) − λ̇(t), where λ(t) is an arbitrary function of t; we make use of Hamiltonian path integrals for this. The second problem is the evaluation of the partition function for hindered rotors, molecules which have a barrier for internal rotation.
The molecule behaves like a free rotor when the barrier is very small compared with the thermal energy, and like a harmonic oscillator when the barrier is very high compared with the thermal energy. Many methods have been developed to obtain the partition function of a hindered rotor; however, most of them are somewhat ad hoc, since they interpolate between the free-rotor and harmonic-oscillator limits. We obtain an approximate partition function by writing it as the trace of the density matrix and performing a harmonic approximation around each point of the potential [10]. The density matrix for a harmonic potential is in turn obtained from a path integral approach [11]. The results of this method are very close to the exact results for the problem obtained numerically. We have also devised a proper method to take the indistinguishability of particles into account in internal rotation, which becomes crucial when calculating the partition function at low temperatures.
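For context, the "exact results obtained numerically" for a hindered rotor can be reproduced by diagonalizing the one-dimensional rotor Hamiltonian in a plane-wave basis. The sketch below, with an illustrative cosine barrier V(θ) = (V0/2)(1 − cos θ) and reduced units ħ²/2I = 1, is not the thesis's approximation scheme, only the numerical benchmark such schemes are compared against:

```python
import numpy as np

def hindered_rotor_Z(V0, beta, kmax=30):
    """Partition function of H = -d^2/dtheta^2 + (V0/2)(1 - cos theta),
    units hbar^2/(2I) = 1, by exact diagonalization in the exp(i k theta) basis."""
    ks = np.arange(-kmax, kmax + 1)
    n = len(ks)
    H = np.zeros((n, n))
    # kinetic term k^2 plus the constant V0/2 on the diagonal
    H[np.arange(n), np.arange(n)] = ks.astype(float) ** 2 + V0 / 2.0
    # <k|cos(theta)|k'> = 1/2 for |k - k'| = 1: the barrier couples neighbours
    off = -V0 / 4.0
    H[np.arange(n - 1), np.arange(1, n)] = off
    H[np.arange(1, n), np.arange(n - 1)] = off
    E = np.linalg.eigvalsh(H)
    return np.exp(-beta * E).sum()

print(hindered_rotor_Z(0.0, 1.0))   # free-rotor limit: sum_k exp(-k^2) ~ 1.7726
print(hindered_rotor_Z(5.0, 1.0))   # barrier raises every level, so Z decreases
```

In the two limits the spectrum goes over to the free-rotor levels k² and to harmonic-oscillator levels at the well bottom, which is exactly the interpolation the ad hoc methods try to capture.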

Incorporating the effect of delay variability in path based delay testing

Tayade, Rajeshwary G. 19 October 2009
Delay variability poses a formidable challenge in both the design and test of nanometer circuits. While process-parameter variability is increasing with technology scaling, dynamic, or vector-dependent, variability is also increasing steadily as circuits become more complex. In this research, we develop solutions that incorporate the effect of delay variability in delay testing, focusing on two different applications. In the first, delay testing is used to test the timing performance of a circuit using path-based fault models. We show that if dynamic delay variability is not accounted for during the path selection phase, a wrong set of paths can be targeted for test. We have developed efficient techniques to model the effect of two dynamic effects, namely multiple-input switching noise and coupling noise. The basic strategy for incorporating dynamic delay variability is to estimate the maximum vector delay of a path without being too pessimistic. In the second application, the objective is to increase the coverage of reliability defects in the presence of process variations. Such defects cause very small delay changes and hence can easily escape regular tests. We develop a circuit that facilitates accurate control over the capture edge and thus enables faster-than-at-speed testing. We further develop an efficient path selection algorithm that selects the path detecting the smallest detectable defect at any node in the presence of process variations.
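As a toy illustration of why ignoring dynamic effects can mis-rank paths (all numbers and the simple Miller-coupling-factor model are invented for illustration, not taken from the dissertation): a stage's delay can be scaled by a coupling factor when an adjacent aggressor switches in the opposite direction, so the path with the largest static delay need not be the one with the largest achievable vector delay.

```python
# Each path is a list of (stage_delay_ps, has_coupling_aggressor) pairs.
# A coupled stage may see its delay scaled by a Miller coupling factor (MCF)
# when an aggressor net switches in the opposite direction at the same time.
def static_delay(stages):
    return sum(d for d, _ in stages)

def max_vector_delay(stages, mcf=1.5):
    # worst case: every coupled stage sees opposite-phase aggressor switching
    return sum(d * mcf if coupled else d for d, coupled in stages)

# illustrative values only
path_a = [(100, False), (120, False), (110, False)]   # static 330 ps
path_b = [(90, True), (100, True), (100, False)]      # static 290 ps

print(static_delay(path_a), static_delay(path_b))           # 330 290
print(max_vector_delay(path_a), max_vector_delay(path_b))   # 330 385.0
```

Under static analysis path_a looks critical, but once worst-case coupling is considered path_b is the one to target, which is the kind of mis-selection the dissertation guards against.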

Coordinated motion control of multiple underactuated autonomous underwater vehicles / Contrôle coordonné de flottille de véhicules sous-marins sous-actionnés autonomes (AUVs)

Xiang, Xianbo. 24 February 2011
This thesis addresses the motion control of nonholonomic, underactuated vehicles moving in a coordinated and autonomous manner. The approaches considered are trajectory tracking (TT) and path following (PF). A new control method, called path tracking (PT), is proposed; it combines the advantages of the two previous methods, joining the smooth convergence induced by path following with the respect of the temporal constraints of trajectory tracking. The study and design of the controller begins with the case of a nonholonomic unicycle-type robot, based on Lyapunov theory and backstepping. These first results are then extended to an underactuated autonomous underwater vehicle (AUV) by analysing the kinematic similarities between the two types of vehicle. Furthermore, the need to take the dynamic properties of the AUV into account is demonstrated, and a 'stern dominancy' condition is established to guarantee that the problem is well posed and that the control is easily computable. For an over-actuated marine system, which can carry out both long-range navigation and station-keeping tasks, a hybrid controller is proposed. Finally, the coordinated control of a formation of marine vehicles is addressed. Control solutions for coordinated path following and coordinated path tracking are proposed. The leader-follower and virtual-structure approaches are treated in a centralized control framework, and the decentralized case is handled using elements of graph theory.
/ In this dissertation, the problems of motion control of underactuated autonomous vehicles are addressed, namely trajectory tracking (TT), path following (PF), and a newly proposed path tracking (PT), which blends PF and TT in order to achieve both smooth spatial convergence and tight temporal performance. The control design starts from the benchmark case of nonholonomic unicycle-type vehicles, where Lyapunov-based design and the backstepping technique are employed, and is then extended to underactuated AUVs based on the similarity between the control inputs of the two kinds of vehicles. Moreover, the handling of side-slip angle acceleration is highlighted, and the stern-dominant property of AUVs is brought out in order to achieve well-posed control computation. Transitions of motion control from underactuated to fully actuated AUVs are also proposed. Finally, coordinated formation control of multiple autonomous vehicles is addressed in two parts, coordinated path following and coordinated path tracking, based on the leader-follower and virtual-structure methods respectively under a centralized control framework, and then solved under a decentralized control framework by resorting to algebraic graph theory.
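A minimal kinematic sketch of the unicycle benchmark: line-of-sight guidance with a proportional heading law drives the cross-track error to the path y = 0. The gains and the guidance law below are illustrative assumptions, not the dissertation's Lyapunov/backstepping controller:

```python
import math

def simulate_path_following(y0=5.0, v=1.0, k=2.0, lookahead=2.0,
                            dt=0.01, steps=3000):
    """Unicycle x' = v cos(th), y' = v sin(th), th' = u, following the path y = 0."""
    x, y, th = 0.0, y0, 0.0
    for _ in range(steps):
        th_des = math.atan2(-y, lookahead)   # line-of-sight heading toward the path
        u = -k * (th - th_des)               # proportional heading control
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += u * dt
    return abs(y)                            # remaining cross-track error

print(simulate_path_following())  # small residual error after 30 s of travel
```

Note the underactuation: only forward speed and turn rate are available, yet the lateral error is regulated indirectly through the heading, which is the structural feature the dissertation's controllers exploit for AUVs as well.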

Self-tuning dynamic voltage scaling techniques for processor design

Park, Junyoung. 30 January 2014
The Dynamic Voltage Scaling (DVS) technique has proven ideal for balancing the performance and energy consumption of a processor, since it allows an almost cubic reduction in dynamic power consumption at the cost of only a nearly linear reduction in performance. Because of this, DVS has been used for two main purposes: energy saving and temperature reduction. Recently, however, DVS processors have lost some of their appeal as process technology advances, owing to increasing Process, Voltage and Temperature (PVT) variations. To make a processor tolerant to the growing uncertainty caused by such variations, designers have added larger timing margins. In a modern DVS processor, therefore, reducing the voltage costs comparatively more performance than in its predecessors. For these reasons, the technique has considerable room for improvement: (a) from an energy-saving viewpoint, the excessive margins that account for worst-case operating conditions can be exploited, because the worst case rarely occurs at run-time; (b) from a temperature-reduction viewpoint, accurate prediction of the optimal performance point of a DVS processor can increase its performance. In this dissertation, we propose four performance-improvement ideas covering the two uses of DVS. For energy saving, we introduce three different margin-reduction (or margin-decision) techniques. First, we introduce a new indirect Critical Path Monitor (CPM) that makes a conventional DVS processor adaptive to its environment. Our CPM is composed of several Slope Generators, each of which produces voltage-scaling slopes similar to those of the potential critical paths under a given process corner.
Each CPR in a Slope Generator tracks the delays of potential critical paths with minimum difference at any condition within a certain voltage range. The CPRs in the same Slope Generator are connected to a multiplexer, and one of them is selected according to the current voltage level. Calibration is done using a conventional speed-binning process with clock duty-cycle modulation. Second, we propose a new direct CPM based on a non-speculative pre-sampling technique. A processor based on this technique predicts timing errors in the actual critical paths and takes preventive steps to avoid them whenever the timing margin falls below a critical level. Unlike a direct CPM that relies on circuit-level speculative operation, the main flip-flop (FF) of our direct CPM never fails, even though the shadow latch can have timing errors, guaranteeing always-correct operation of the processor. Our non-speculative CPM is more suitable for high-performance processor designs than a speculative CPM in that it does not require modification of the original design and has lower power overhead. Third, we introduce a novel method that determines the most accurate margin based on the conventional binning process. By reusing the hold-scan FFs in a processor, we reduce design complexity, minimize hardware overhead, and increase error-detection accuracy. Running workloads on the processor with Stop-Go clock gating allows us to find which paths have timing errors during the speed-binning steps at various fixed temperature levels. From this timing-error information, we can determine different maximum frequencies for diverse operating conditions. This method achieves high accuracy without large overhead. For temperature reduction, we introduce a run-time temperature-monitoring scheme that predicts the optimal performance point of a DVS processor with high accuracy.
To increase the accuracy of the optimal-performance-point prediction, this technique monitors the thermal stress of the processor at run-time and uses several look-up tables (LUTs) for different process corners. The monitoring is performed while applying Stop-Go clock gating, and the average EN value is calculated at the end of the monitoring period. The optimal performance point is then predicted using the average EN value and the LUT corresponding to the process corner under which the processor was manufactured. Simulation results show that we can achieve maximum processor performance while keeping the processor temperature within the threshold temperature.
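The "cubic power, linear performance" trade-off stated at the start of this abstract follows from the usual first-order model P_dyn = C·V²·f with f ∝ V (an idealization; real processors deviate near the threshold voltage):

```python
def dvs_scaling(v_scale):
    """Relative frequency, dynamic power, and energy-per-task when the supply
    voltage is scaled by v_scale, assuming f ~ V and P_dyn ~ C * V^2 * f."""
    f = v_scale        # performance (frequency) scales ~linearly with V
    p = v_scale ** 3   # dynamic power scales ~cubically with V
    e = p / f          # energy per task = P * (time ~ 1/f), so ~V^2
    return f, p, e

f, p, e = dvs_scaling(0.8)
print(f, p, e)   # ~0.8, ~0.512, ~0.64: 20% slower, ~49% less power
```

This is why reclaiming even modest voltage margin, as the margin-reduction techniques above aim to do, yields disproportionate energy savings.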

Συντομότερες Διαδρομές Δύο Κριτηρίων: Αλγόριθμοι και Πειραματική Αξιολόγηση / Biobjective Shortest Path Problems: Algorithms and Experimental Study

Τσαγγούρης, Γεώργιος. 16 May 2007
The shortest path problem is perhaps the most fundamental single-objective optimization problem in networks. In many applications, however, we are interested in optimizing more than one criterion. For example, when routing in a road network with tolls, we are interested in minimizing both the travel time and the monetary cost. Similar examples can also be found in communication networks, where the criteria under investigation include the delay, the fault probability, the number of hops, and others. In such cases the "best"

Practical Implementations Of The Active Set Method For Support Vector Machine Training With Semi-definite Kernels

Sentelle, Christopher. 01 January 2014
The Support Vector Machine (SVM) is a popular binary classification model owing to its superior generalization performance, relative ease of use, and the applicability of kernel methods. SVM training entails solving an associated quadratic program (QP) that presents significant speed and memory challenges for very large datasets; research on numerical optimization techniques tailored to SVM training is therefore vast. Slow training times are especially of concern when one considers that re-training is often necessary at several values of the model's regularization parameter, C, as well as of the associated kernel parameters. The active set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, which is the case for the ℓ1-loss SVM formulation. There has recently been renewed interest in the active set method as a technique for exploring the entire SVM regularization path: it has been shown to produce the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to train at a single value of C with traditional methods. Unfortunately, most active set implementations used for SVM training require positive definite kernels, and those that do allow semi-definite kernels tend to be complex and can exhibit instability or, worse, lack of convergence. This severely limits applicability, since it precludes use of the linear kernel, can be an issue when duplicate data points exist, and rules out low-rank kernel approximations that improve tractability for large datasets. The difficulty with a semi-definite kernel arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed from the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix.
Unfortunately, this adds significant complexity to the implementation, and if care is not taken, numerical instability or, worse, failure to converge can result. This research shows that the singular KKT system can be avoided altogether with simple modifications to the active set method. The result is a practical, easy-to-implement active set method that needs neither to detect the rank of the KKT matrix explicitly nor to modify the factorization or solution methods based on that rank. Methods are given both for conventional SVM training and for computing the regularization path, and they are simple and numerically stable. First, an efficient revised simplex method is implemented for SVM training with semi-definite kernels (SVM-RSQP) and shown to out-perform competing active set implementations in training time, as well as to perform on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, and it does not require external solvers. Theoretical analysis reveals new insights into the nature of path-following algorithms. Finally, a method is given for computing the approximate regularization path and approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP), providing significant, orders-of-magnitude speed-ups relative to a traditional grid search in which re-training is performed at each parameter value. Surprisingly, it is also shown that even when the solution along the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single parameter value.
New insights are given concerning the limiting behaviors of the regularization and kernel paths, as well as the use of low-rank kernel approximations.
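The semi-definite difficulty described above is easy to exhibit: a linear kernel on more points than features, or any kernel Gram matrix with duplicate points, is rank-deficient, so an unlucky active set yields a singular KKT system. A small numerical illustration (not the SVM-RSQP code itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))   # 6 points in only 2 feature dimensions
X[3] = X[0]                       # duplicate data point

K_lin = X @ X.T                   # linear kernel: rank <= 2 < 6
K_rbf = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))  # RBF Gram

print(np.linalg.matrix_rank(K_lin))   # 2: linear kernel is only semi-definite
print(np.linalg.matrix_rank(K_rbf))   # 5: the duplicate point kills full rank
print(np.linalg.eigvalsh(K_lin).min())  # ~0: positive SEMI-definite, not definite
```

Any active-set step whose working set selects linearly dependent columns of such a Gram matrix produces the singular KKT matrix the dissertation's modifications are designed to sidestep.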

AN UNMANNED AERIAL VEHICLE PROJECT FOR UNDERGRADUATES

Bradley, Justin; Prall, Breton. October 2006
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Brigham Young University recently introduced a project for undergraduates in which a miniature unmanned aerial vehicle system is constructed. The system is capable of autonomous flight, takeoff, landing, and navigation through a planned path. In addition, through the use of video and telemetry collected by the vehicle, accurate geolocation of specified targets is performed. This paper outlines our approach and successes in facilitating this accomplishment at the undergraduate level.

The Performance Evaluation of an OFDM-Based iNET Transceiver

Lu, Cheng; Roach, John. October 2009
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The nXCVR-2000G transceiver is an 802.11a OFDM-based system undergoing performance studies using both simulation and laboratory tests. The multipath channel model used in the simulation experiments is based on the telemetry multipath channel model described in the iNET Telemetry Experimental Standard document. To date, the simulation results have been confirmed by outdoor laboratory tests. They show that multipath has little impact on OFDM performance when the channel delay spread is within 800 ns, the guard interval (GI) specified by 802.11a. For example, with a channel delay spread of 144 ns (τ1) and a reflection coefficient of -0.26 dB (Γ1), the Error Vector Magnitude (EVM) is on the order of 2.5%. As the channel delay spread extends beyond the standard 800 ns GI, the demodulated signal degrades. The performance penalty depends on the channel spread factor and the total Signal-to-Interference-plus-Noise Ratio (SINR).
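The guard-interval effect reported above can be reproduced in a few lines: with a cyclic prefix at least as long as the channel memory, multipath reduces to a per-subcarrier complex gain and a one-tap equalizer recovers the constellation exactly, while a channel longer than the prefix leaves residual EVM. The parameters below (64 subcarriers, QPSK, simple tap channels, perfect channel knowledge) are illustrative, not the iNET channel model:

```python
import numpy as np

def ofdm_evm(h, n_sub=64, cp=16, seed=1):
    """EVM (%) of one OFDM symbol sent through channel taps h with cyclic prefix cp."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, (n_sub, 2))
    X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK
    x = np.fft.ifft(X) * np.sqrt(n_sub)
    tx = np.concatenate([x[-cp:], x])        # prepend cyclic prefix
    rx = np.convolve(tx, h)[: len(tx)]       # multipath channel (linear conv.)
    y = rx[cp : cp + n_sub]                  # strip prefix
    Y = np.fft.fft(y) / np.sqrt(n_sub)
    H = np.fft.fft(h, n_sub)
    Xhat = Y / H                             # one-tap equalizer, known channel
    return 100 * np.sqrt(np.mean(np.abs(Xhat - X) ** 2) / np.mean(np.abs(X) ** 2))

print(ofdm_evm(np.array([1.0, 0.5])))                   # spread within CP: ~0 EVM
print(ofdm_evm(np.array([1.0] + [0.0] * 20 + [0.4])))   # spread beyond CP: large EVM
```

The first channel's delay spread fits inside the prefix, so linear convolution looks circular and the equalization is exact; the second channel's memory exceeds the prefix, reproducing the degradation the paper observes beyond 800 ns.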

Geometry of numbers, class group statistics and free path lengths

Holmin, Samuel. January 2015
This thesis contains four papers, where the first two are in the area of geometry of numbers, the third is about class group statistics and the fourth is about free path lengths. A general theme throughout the thesis is lattice points and convex bodies. In Paper A we give an asymptotic expression for the number of integer matrices with primitive row vectors and a given nonzero determinant, such that the Euclidean matrix norm is less than a given large number. We also investigate the density of matrices with primitive rows in the space of matrices with a given determinant, and determine its asymptotics for large determinants. In Paper B we prove a sharp bound for the remainder term of the number of lattice points inside a ball, when averaging over a compact set of (not necessarily unimodular) lattices, in dimensions two and three. We also prove that such a bound cannot hold if one averages over the space of all lattices. In Paper C, we give a conjectural asymptotic formula for the number of imaginary quadratic fields with class number h, for any odd h, and a conjectural asymptotic formula for the number of imaginary quadratic fields with class group isomorphic to G, for any finite abelian p-group G where p is an odd prime. In support of our conjectures we have computed these quantities, assuming the generalized Riemann hypothesis and with the aid of a supercomputer, for all odd h up to a million and all abelian p-groups of order up to a million, thus producing a large list of “missing class groups.” The numerical evidence matches quite well with our conjectures. In Paper D, we consider the distribution of free path lengths, or the distance between consecutive bounces of random particles in a rectangular box. 
If each particle travels a distance R, then as R → ∞ the distribution of free path lengths coincides with the distribution of the length of the intersection of a random line with the box (for a natural ensemble of random lines), and we determine the mean value of the path lengths. Moreover, we give an explicit formula for the probability density function in dimensions two and three. In dimension two we also consider a closely related model in which each particle is allowed to bounce N times, as N → ∞, and give an explicit formula for its probability density function.
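The lattice-point counts behind Papers A and B are easy to experiment with: the sketch below counts Z³ points in a ball of radius R and compares with the volume (4/3)πR³, exposing the remainder term whose average behavior Paper B bounds (a brute-force illustration, not the paper's averaging argument):

```python
import numpy as np

def lattice_points_in_ball(R):
    """Number of integer points (x, y, z) with x^2 + y^2 + z^2 <= R^2."""
    r = int(np.floor(R))
    g = np.arange(-r, r + 1)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    return int(np.count_nonzero(X**2 + Y**2 + Z**2 <= R * R))

for R in (1.0, 5.0, 20.0):
    N = lattice_points_in_ball(R)
    V = 4.0 / 3.0 * np.pi * R**3
    print(R, N, N - V)   # N(R) minus vol(B_R): the remainder term
```

As R grows, N(R)/vol(B_R) tends to 1 while the remainder fluctuates, and it is the size of these fluctuations, averaged over lattices, that Paper B controls.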

NCPA Optimizations at Gemini North Using Focal Plane Sharpening

Ball, Jesse Grant. January 2016
Non-common path aberrations (NCPA) in an adaptive optics system are static aberrations that arise from the difference in optical path between light arriving at the wavefront sensor (WFS) and light arriving at the science detector. If the adaptive optics are calibrated to output an unaberrated wavefront, then any optics outside the path of the light arriving at the WFS inherently introduce aberrations into this corrected wavefront. NCPA corrections calibrate the adaptive optics system so that it outputs a wavefront inverse in phase to the aberrations introduced by these non-common path optics; the wavefront therefore arrives unaberrated at the science detector, rather than at the output of the corrective elements. Focal plane sharpening (FPS) is one technique used to calibrate for NCPA in adaptive optics systems: small changes in shape are applied to the deformable element(s), and images are taken and analyzed for image quality (IQ) on the science detector. This process is iterated until the image quality is maximized, at which point the NCPA are corrected. The work described in this paper employs two FPS techniques at Gemini North in an attempt to mitigate up to 33% of the adaptive optics performance and image quality degradations currently under investigation. Changes in the NCPA correction are made by varying the Zernike polynomial coefficients in the closed-loop correction file for Altair (the facility adaptive optics system). As these coefficients are varied during closed-loop operation, a calibration point source at the focal plane of the telescope is imaged through Altair and NIRI (the facility near-infrared imager) at f/32 in K-prime (2.12 μm). These images are analyzed to determine the Strehl ratio, and a parabolic fit is used to determine the coefficient correction that maximizes the Strehl ratio.
Historically, calibrations of the NCPA file in Altair's control loop were done at night on a celestial point source and used a separate, high-resolution WFS (with its own inherent aberrations, common to neither NIRI nor Altair) to measure phase corrections directly. In this paper it is shown that using FPS on a calibration source removes both the need for costly time on the night sky and the use of separate optical systems (which introduce their own NCPA) for analysis. An increase of 6% in Strehl ratio is achieved (an improvement of 11% over current NCPA corrections), and discussions of future improvements and extensions of the technique are presented. Furthermore, a potentially unknown problem is uncovered in the form of high-spatial-frequency degradation in the PSF of the calibration source.
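The parabolic-fit step lends itself to a compact sketch: sample the Strehl ratio at a few values of one Zernike coefficient and take the vertex of a fitted quadratic as the correction. The synthetic Strehl response below is an invented stand-in for illustration; the real procedure measures Strehl from NIRI images of the calibration source:

```python
import numpy as np

def best_coefficient(coeffs, strehls):
    """Vertex of the parabola fitted to (coefficient, Strehl) samples."""
    a, b, _ = np.polyfit(coeffs, strehls, 2)
    if a >= 0:
        raise ValueError("fit is not concave; no interior Strehl maximum")
    return -b / (2 * a)

# synthetic "measurements": Strehl peaks at coefficient c* = 0.13 (made up)
c_true = 0.13
c = np.array([-0.2, 0.0, 0.2])
s = 0.42 - 3.0 * (c - c_true) ** 2   # toy concave Strehl response
print(best_coefficient(c, s))        # recovers ~0.13
```

Iterating this fit-and-update step over each Zernike mode is the essence of focal plane sharpening; the concavity check guards against fitting a coefficient whose sampled range does not bracket the Strehl maximum.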
