11

Evaluating and optimizing the performance of real-time feedback-driven single particle tracking microscopes through the lens of information and optimal control

Vickers, Nicholas Andrew 17 January 2023 (has links)
Single particle tracking (SPT) has become a ubiquitous class of tools in the study of biology at the molecular level. While the broad adoption of these techniques has yielded significant advances, it has also revealed the limitations of the methods. Most notable among these is that traditional single particle tracking is limited to imaging the particle at low temporal resolution and over a small axial range, which restricts applications to slow processes confined to a plane. Biological processes in the cell, however, happen at multiple time scales and length scales. Real-time feedback-driven single particle tracking microscopes have emerged as one group of methods that can overcome these limitations. However, the development of these techniques has been ad hoc, and their performance has not been analyzed consistently in a way that enables comparisons across techniques, leading to incremental improvements on existing sets of tools with no sense of fit or optimality with respect to SPT experimental requirements. This thesis addresses these challenges through three key questions: 1) What performance metrics are necessary to compare different techniques, allowing for easy selection of the method that best fits a particular application? 2) What is a procedure for designing single particle tracking microscopes for the best performance? 3) How does one controllably and repeatably test single particle tracking performance experimentally on specific microscopes? These questions are tackled in four thrusts: 1) a comprehensive review of real-time feedback-driven single particle tracking spectroscopy, 2) the creation of an optimization framework using Fisher information, 3) the design of a real-time feedback-driven single particle tracking microscope utilizing extremum seeking control, and 4) the development of synthetic motion, a protocol that provides biologically relevant, known ground-truth particle motion to test single particle tracking microscopes and data analysis algorithms. The comprehensive review yields a unified view of single particle tracking microscopes and highlights two clear challenges, the photon budget and the control temporal budget, that work to limit the two key performance metrics, tracking duration and Fisher information. Fisher information provides a common framework for understanding the elements of real-time feedback-driven single particle tracking microscopes, and the corresponding information optimization framework is a method to optimally design these microscopes toward an experimental aim. The thesis then expands an existing tracking algorithm to handle multiple particles through a multi-layer control architecture, and introduces REACTMIN, a new approach that reactively scans a minimum of light to overcome both the photon budget and the control temporal budget. This enables tracking durations up to hours and position localization down to a few nanometers, with temporal resolutions greater than 1 kHz. Finally, synthetic motion provides a repeatable and programmable method to test single particle tracking microscopes and algorithms with a known ground-truth experiment. The performance of this method is analyzed in the presence of common actuator limitations. / 2024-01-16T00:00:00Z
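As a hedged illustration of how the photon budget limits Fisher information and hence localization precision (this is the textbook shot-noise-limited result for an idealized Gaussian point-spread function, not a calculation from the thesis):

```python
import numpy as np

# Illustrative sketch only: for an ideal Gaussian PSF of width sigma_psf and
# N detected photons (no background, no pixelation), the Fisher information
# for the particle position is I = N / sigma_psf**2, so the Cramer-Rao bound
# on localization precision is sigma_psf / sqrt(N).  The PSF width used here
# is an assumed value for illustration.

def localization_crb(n_photons, sigma_psf_nm=150.0):
    """Best-case (shot-noise-limited) localization precision in nm."""
    fisher_info = n_photons / sigma_psf_nm**2   # information per coordinate
    return 1.0 / np.sqrt(fisher_info)           # Cramer-Rao lower bound

for n in (10, 100, 1000, 10000):
    print(f"N = {n:6d} photons  ->  CRB ~ {localization_crb(n):6.1f} nm")
```

Under these assumptions, nanometer-scale precision requires on the order of 10^4 detected photons, which is why the photon budget and the tracking duration trade off against each other.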
12

Information Theoretical Measures for Achieving Robust Learning Machines

Zegers, Pablo, Frieden, B., Alarcón, Carlos, Fuentes, Alexis 12 August 2016 (has links)
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process to a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples, showing that the procedure is indeed successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. The result is also surprising in being an analytical relation, given the purely numerical operations of the learning machine.
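The two quantities balanced in that relation can be made concrete with a small numerical check against their closed forms for a Gaussian density (this is an illustration of the definitions only, not the paper's derived objective):

```python
import numpy as np

# Numerically evaluate Shannon differential entropy and (location) Fisher
# information for a Gaussian density and compare with the closed forms
#   h = 0.5*ln(2*pi*e*sigma^2),   I = 1/sigma^2.
# Illustrative only; the paper's objective function is not reproduced here.

sigma = 1.5
x = np.linspace(-12, 12, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

h_num = -np.sum(p * np.log(p)) * dx          # differential entropy
dp = np.gradient(p, dx)
i_num = np.sum(dp**2 / p) * dx               # Fisher information (location)

print(f"entropy numeric {h_num:.4f}  closed form {0.5*np.log(2*np.pi*np.e*sigma**2):.4f}")
print(f"Fisher  numeric {i_num:.4f}  closed form {1/sigma**2:.4f}")
```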
13

Optimal sensing matrices

Achanta, Hema Kumari 01 December 2014 (has links)
Location information is of extreme importance in every walk of life, ranging from commercial applications such as location-based advertising and location-aware next-generation communication networks such as the 5G networks to security-based applications like threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects, there is usually a non-line-of-sight (NLOS) scenario preventing GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries show significantly poor performance even in low-noise scenarios when triangulation-based localization methods are used. This brings the need for the design of an optimum sensor placement scheme for better performance in the source localization process. The optimum sensor placement is the one that optimizes the underlying Fisher Information Matrix (FIM). This thesis presents a class of canonical optimum sensor placements that produce the optimum FIM for N-dimensional source localization (N ≥ 2) for the case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors are all on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution designed for the 2D problem represents optimum spherical codes, the study of three- or higher-dimensional designs provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing-based applications. This thesis also presents an optimum sensing matrix design for energy-efficient source localization in 2D. Specifically, the results relate to the worst-case scenario when the minimum number of sensors are active in the sensor network. We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimum sensor placement with minimum communication overhead. The design of equal-norm-column sensing matrices has a variety of other applications apart from optimum sensor placement for N-dimensional source localization. One such application is Fourier analysis in Magnetic Resonance Imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain that transforms the MR image into a sparse image that is compressible. Such transform domains include the Wavelet Transform and the Fourier Transform. The inherent sparsity of MR images in an appropriately chosen transform domain motivates another objective of this thesis, which is to provide a method for designing a compressive sensing measurement matrix by choosing a subset of rows from the Discrete Fourier Transform (DFT) matrix. This thesis uses the spark of the matrix as the design criterion. The spark of a matrix is defined as the smallest number of linearly dependent columns of the matrix. The objective is to select a subset of rows from the DFT matrix in order to achieve maximum spark. The design procedure leads to an interesting study of coprime conditions between the chosen row indices and the size of the DFT matrix.
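A minimal sketch of the kind of FIM-conditioning criterion described above, under an assumed range-only measurement model with i.i.d. Gaussian noise (not necessarily the thesis's exact model): the FIM is proportional to the sum of outer products of the unit vectors from the source to the sensors, and equally spaced sensors on the enclosing circle make it a multiple of the identity.

```python
import numpy as np

# Hedged illustration: condition number of the 2D localization FIM for
# range-only measurements with i.i.d. Gaussian noise.  Equally spaced
# sensors on the circle give condition number 1; clustered sensors do not.

def fim_condition(angles):
    u = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit bearings
    fim = u.T @ u                                           # ~ sum of u u^T
    return np.linalg.cond(fim)

n = 5
equally_spaced = 2 * np.pi * np.arange(n) / n
clustered = np.linspace(0, np.pi / 4, n)                    # poor geometry
print("condition (equally spaced):", fim_condition(equally_spaced))
print("condition (clustered)     :", fim_condition(clustered))
```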
14

Neural Networks and the Natural Gradient

Bastian, Michael R. 01 May 2010 (has links)
Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, they require additional training parameters and computational overhead. By using a new formulation for the natural gradient, an algorithm is described that uses less memory and processing time than previous algorithms with comparable performance.
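A minimal sketch of the standard natural-gradient update the abstract refers to, theta <- theta - eta * F^{-1} * grad, using a model (a Gaussian with unknown mean and log-scale) whose Fisher matrix has a closed form; this is illustrative and not the thesis's memory-efficient formulation.

```python
import numpy as np

# Natural-gradient sketch: for a Gaussian parameterized by (mu, s = log sigma),
# the Fisher information matrix is F = diag(1/sigma^2, 2), so the natural
# gradient rescales the ordinary gradient by F^{-1} = diag(sigma^2, 1/2).

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=5000)

mu, s = 0.0, 0.0                            # initial parameters
eta = 1.0
for _ in range(20):
    sigma2 = np.exp(2 * s)
    g_mu = np.mean(-(data - mu) / sigma2)   # mean gradient of the NLL
    g_s = np.mean(1.0 - (data - mu) ** 2 / sigma2)
    mu -= eta * sigma2 * g_mu               # precondition by F^{-1}
    s -= eta * g_s / 2.0

print(f"estimated mu = {mu:.3f}, sigma = {np.exp(s):.3f}")   # ~ 3.0, ~ 2.0
```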
15

Applied estimation theory on power cable as transmission line.

Mansour, Tony, Murtaja, Majdi January 2015 (has links)
This thesis presents how to estimate the length of a power cable using the Maximum Likelihood Estimate (MLE) technique in Matlab. The model of the power cable is evaluated in the time domain with additive white Gaussian noise. Statistics have been used to evaluate the performance of the estimator by repeating the experiment for a large number of samples, where the random additive noise is generated for each sample. The estimated sample variance is compared to the theoretical Cramér-Rao Lower Bound (CRLB) for unbiased estimators. At the end of the thesis, numerical results are presented that show when the resulting sample variance is close to the CRLB, and hence when the performance of the estimator is most accurate.
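A hedged sketch of the validation procedure described above, using a simple stand-in model rather than the cable transmission-line model: estimating a constant level in additive white Gaussian noise, where the MLE is the sample mean and the CRLB is sigma^2 / N.

```python
import numpy as np

# Monte Carlo comparison of an MLE's sample variance against the CRLB.
# Stand-in model (assumption, not the thesis's cable model): x[n] = A + noise,
# MLE of A is the sample mean, CRLB for unbiased estimators is sigma^2 / N.

rng = np.random.default_rng(0)
A_true, sigma, N, trials = 5.0, 0.8, 64, 20000

estimates = np.empty(trials)
for k in range(trials):                      # repeat the experiment many times
    x = A_true + sigma * rng.normal(size=N)  # one noisy record
    estimates[k] = x.mean()                  # maximum likelihood estimate

sample_var = estimates.var(ddof=1)
crlb = sigma**2 / N
print(f"sample variance of MLE: {sample_var:.5f}   CRLB: {crlb:.5f}")
```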
16

Photon Statistics in Scintillation Crystals

Bora, Vaibhav Joga Singh January 2015 (has links)
Scintillation-based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy, and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and therefore, at this time, not available in as large formats and are much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have a relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (the Poisson case). However, the quantum theory of light allows for Fano factors less than one. We used two methods to look at the correlations between two detectors observing the same scintillation pulse in order to estimate the Fano factor of the scintillation photons. The relationship between the Fano factor and the correlation between the integrals of the two detected signals was analytically derived, and the Fano factor was estimated using measurements for SrI₂:Eu, YAP:Ce and CsI:Na. We also found an empirical relationship between the Fano factor and the covariance as a function of time between two detectors observing the same scintillation pulse. This empirical model was used to estimate the Fano factor of LaBr₃:Ce and YAP:Ce using the experimentally measured timing covariance. The estimates of the Fano factor from the time-covariance results were consistent with the estimates from the correlation between the integral signals. We found scintillation light from some scintillators to be sub-Poisson. For the same mean number of total scintillation photons, sub-Poisson light has lower noise. We then conducted a simulation study to investigate whether this low-noise sub-Poisson light can be used to improve spatial resolution. We calculated the Cramér-Rao bound for different detector geometries, positions of interaction, and Fano factors. The Cramér-Rao calculations were verified by generating simulated data and estimating the variance of the maximum likelihood estimator. We found that the Fano factor has no impact on the spatial resolution in gamma-ray imaging systems.
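A minimal sketch of the central quantity, the Fano factor F = variance / mean of a counting process. Simulated Poisson counts give F close to 1, while a binomial count (a fixed number of emitters, each detected with probability p) is a simple sub-Poisson stand-in with F = 1 - p; this is illustrative only, not the correlation-based estimator developed in the dissertation.

```python
import numpy as np

# Fano factor of simulated photon-count records: Poisson vs. sub-Poisson.

def fano(counts):
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=400, size=100000)
sub_poisson_counts = rng.binomial(n=500, p=0.8, size=100000)  # mean 400, var 80

print(f"Poisson      F = {fano(poisson_counts):.3f}")      # ~ 1.0
print(f"sub-Poisson  F = {fano(sub_poisson_counts):.3f}")  # ~ 0.2
```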
17

Subpixel Image Co-Registration Using a Novel Divergence Measure

Wisniewski, Wit Tadeusz January 2006 (has links)
Sub-pixel image alignment estimation is desirable for co-registration of objects in multiple images to a common spatial reference and as alignment input to multi-image processing. Applications include super-resolution, image fusion, change detection, object tracking, object recognition, video motion tracking, and forensics. Information theoretical measures are commonly used for co-registration in medical imaging. The published methods apply Shannon's Entropy to the Joint Measurement Space (JMS) of two images. This work introduces into the same context a new set of statistical divergence measures derived from Fisher Information. The new methods described in this work are applicable to uncorrelated imagery and to imagery that becomes statistically least dependent upon co-alignment. Both characteristics occur with multi-modal imagery and cause cross-correlation methods, as well as maximum dependence indicators, to fail. Fisher Information-based estimators, together as a set with an Entropic estimator, provide substantially independent information about alignment. This increases the statistical degrees of freedom, allowing for precision improvement and for reduced estimator failure rates compared to Entropic estimator performance alone. The new Fisher Information methods are tested for performance on real remotely sensed imagery that includes Landsat TM multispectral imagery and ESR SAR imagery, as well as on randomly generated synthetic imagery. On real imagery, the co-registration cost function is qualitatively examined for features that reveal the correct point of alignment. The alignment estimates agree with manual alignment to within manual alignment precision. Alignment truth in synthetic imagery is used to quantitatively evaluate co-registration accuracy. The results from the new Fisher Information-based algorithms are compared to Entropy-based Mutual Information and correlation methods, revealing equal or superior precision and lower failure rates at signal-to-noise ratios below one.
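A sketch of the entropic baseline mentioned above (Mutual Information computed over the joint measurement space via a joint histogram), not the new Fisher-information divergences introduced in the dissertation; only integer shifts are searched here, whereas sub-pixel estimation would interpolate around the peak of the cost function.

```python
import numpy as np

# Mutual-information co-registration over integer shifts (entropic baseline).

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(ref, moving, max_shift=5):
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            scores[(dy, dx)] = mutual_information(ref, shifted)
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth field
moving = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)            # known offset
print("recovered shift (dy, dx):", best_shift(ref, moving))      # (3, -2)
```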
18

Applications of Information Inequalities to Linear Systems: Adaptive Control and Security

Ziemann, Ingvar January 2021 (has links)
This thesis considers the application of information inequalities, Cramér-Rao type bounds based on Fisher information, to linear systems. These tools are used to study the trade-offs between learning and performance in two application areas: adaptive control and control systems security. In the first part of the thesis, we study stochastic adaptive control of linear quadratic regulators (LQR). Here, information inequalities are used to derive instance-dependent regret lower bounds. First, we consider a simplified version of LQR, a memoryless reference tracking model, and show how regret can be linked to a cumulative estimation error. This is then exploited to derive a regret lower bound in terms of the Fisher information generated by the experiment of the optimal policy. It is shown that if the optimal policy has ill-conditioned Fisher information, then so does any low-regret policy. This is combined with a Cramér-Rao bound to give a regret lower bound of order √T in the time horizon T for a class of instances we call uninformative. The lower bound holds for all policies which depend smoothly on the underlying parametrization. Second, we extend these results to the general LQR model and to arbitrary affine parametrizations of the instance parameters. The notion of uninformativeness is generalized to this situation to give a structure-dependent rank condition for when logarithmic regret is impossible. This is done by reduction of regret to a cumulative Bellman error. Due to the quadratic nature of LQR, this Bellman error turns out to be a quadratic form, which again can be interpreted as an estimation error. Using this, we prove a local minimax regret lower bound, the proof of which relies on relating the minimax regret to a Bayesian estimation problem and then using Van Trees' inequality. Again, it is shown that an appropriate information quantity of any low-regret policy is similar to that of the optimal policy and that any uninformative instance suffers local minimax regret at least of order √T. Moreover, it is shown that the notion of uninformativeness, when specialized to certain well-understood scenarios, yields a tight characterization of √T regret. In the second part of this thesis, we study control systems security problems from a Fisher information point of view. First, we consider a secure state estimation problem and characterize the maximal impact an adversary can cause by means of least informative distributions, i.e., those which maximize the Cramér-Rao bound. For a linear measurement equation, it is shown that the least informative distribution, subject to variance and sparsity constraints, can be solved for by a semi-definite program, which becomes mixed-integer in the presence of sparsity constraints. Furthermore, by relying on well-known results on minimax and robust estimation, a game-theoretic interpretation for this characterization of the maximum impact is offered. Last, we consider a Fisher information regularized minimum variance control objective to study the trade-offs between parameter privacy and control performance. It is noted that this can be motivated, for instance, by learning-based attacks, in which case one seeks to leak as little information as possible to a system-identification adversary. Supposing that the feedback law is linear, the noise distribution minimizing the trace of the Fisher information subject to a state variance penalty is found to be conditionally Gaussian.
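For reference, the scalar form of Van Trees' inequality (the Bayesian Cramér-Rao bound) invoked in the proof technique above is the standard statement below; the notation is generic and not specific to the thesis.

```latex
% Scalar Van Trees (Bayesian Cramer-Rao) inequality: for any estimator
% \hat{\theta} and prior \pi, with I(\theta) the Fisher information of the
% data given \theta and I(\pi) the Fisher information of the prior,
\[
  \mathbb{E}\!\left[(\hat{\theta} - \theta)^2\right]
  \;\ge\;
  \frac{1}{\mathbb{E}_{\pi}\!\left[ I(\theta) \right] + I(\pi)},
  \qquad
  I(\pi) = \int \frac{\bigl(\pi'(\theta)\bigr)^2}{\pi(\theta)}\, d\theta .
\]
```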
19

Fisher Information Test of Normality

Lee, Yew-Haur Jr. 21 September 1998 (has links)
An extremal property of normal distributions is that they have the smallest Fisher Information for location among all distributions with the same variance. A new test of normality proposed by Terrell (1995) utilizes this property by finding the density of maximum likelihood constrained to have the expected Fisher Information under normality, based on the sample variance. The test statistic is then constructed as a ratio of the resulting likelihood against that of normality. Since the asymptotic distribution of this test statistic is not available, the critical values for n = 3 to 200 have been obtained by simulation and smoothed using polynomials. An extensive power study shows that the test has superior power against distributions that are symmetric and leptokurtic (long-tailed). Another advantage of the test over existing ones is the direct depiction of any deviation from normality in the form of a density estimate. This is evident when the test is applied to several real data sets. Testing of normality in residuals is also investigated. Various approaches to dealing with residuals that are possibly heteroscedastic and correlated suffer from a loss of power. The approach with the fewest undesirable features is to use the Ordinary Least Squares (OLS) residuals in place of independent observations. From simulations, it is shown that one has to be careful about the levels of the normality tests and also in generalizing the results. / Ph. D.
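The extremal property underlying the test can be stated compactly; this is the standard result (obtainable from the Cramér-Rao inequality applied to a location family), not a formula taken from the dissertation itself.

```latex
% Among all densities f with variance sigma^2, the Fisher information for
% location is minimized by the normal density:
\[
  I(f) \;=\; \int \frac{\bigl(f'(x)\bigr)^2}{f(x)}\, dx
  \;\ge\; \frac{1}{\sigma^2},
  \qquad \text{with equality iff } f \text{ is } \mathcal{N}(\mu, \sigma^2).
\]
```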
20

Mathematical modeling with applications in high-performance coding

Su, Yong 10 October 2005 (has links)
No description available.
