About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

LORE Approach for Phased Array Measurements and Noise Control of Landing Gears

Ravetta, Patricio A. 29 December 2005 (has links)
A novel concept in noise control devices for landing gears is presented. These devices consist of elastic membranes creating a fairing around the major noise sources. The purpose of these devices is to reduce wake interactions and to hide components from the flow, thus reducing noise emission. The design of these fairings focused on the major noise sources identified in a 777 main landing gear. To find the major noise sources, an extensive noise source identification process was performed using phased arrays. To this end, phased array technologies were developed and a 26%-scale 777 main landing gear model was tested at the Virginia Tech Stability Wind Tunnel. Since phased array technologies present some issues that can lead to misinterpretation of results and inaccuracy in determining actual levels, a new approach to the deconvolution of acoustic sources has been developed. The goal of this post-processing is to "simplify" the beamforming output by suppressing the sidelobes and reducing the source mainlobes to a small number of points that accurately identify the noise source positions and their actual levels. To this end, the beamforming output is modeled as a superposition of "complex" point spread functions and a nonlinear system of equations is posed. This system is solved using a new two-step procedure: in the first step an approximate linear problem is solved, while in the second step an optimization is performed over the nonzero values obtained in the previous step. The solution to this system of equations yields the source positions and amplitudes. The technique is called noise source Localization and Optimization of Array Results (LORE). Numerical simulations as well as sample experimental results are shown for the proposed post-processing. / Ph. D.
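The two-step structure described above can be illustrated with a small sketch: a non-negative linear solve provides a candidate source support, and a nonlinear refinement is then run over only the nonzero entries. This is a hedged illustration of the idea rather than the LORE implementation; the PSF matrix `A`, the support tolerance, and the use of SciPy's `nnls` and `minimize` are assumptions made for the example.

```python
# Illustrative sketch of a LORE-style two-step deconvolution (not the author's code).
# Assumption: the beamforming map b is modeled as b ≈ A @ x, where column j of A is
# the point-spread function of a unit-strength source at grid point j.
import numpy as np
from scipy.optimize import nnls, minimize

def two_step_deconvolution(A, b, support_tol=1e-6):
    # Step 1: approximate linear problem -- non-negative least squares yields a
    # sparse candidate set of source locations.
    x0, _ = nnls(A, b)
    support = np.flatnonzero(x0 > support_tol * x0.max())

    # Step 2: nonlinear refinement restricted to the nonzero entries found in step 1.
    def cost(s):
        x = np.zeros_like(x0)
        x[support] = s
        return np.sum((A @ x - b) ** 2)

    result = minimize(cost, x0[support], bounds=[(0.0, None)] * len(support))
    x = np.zeros_like(x0)
    x[support] = result.x
    return x  # estimated source amplitudes on the scan grid
```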
132

Machine Learning Approaches for Modeling and Correction of Confounding Effects in Complex Biological Data

Wu, Chiung Ting 09 June 2021 (has links)
With the huge volume of biological data generated by new technologies and the booming of new machine-learning-based analytical tools, we expect to advance life science and human health at an unprecedented pace. Unfortunately, there is a significant gap between the complex raw biological data from real life and the data required by mathematical and statistical tools. This gap arises from two fundamental and universal problems in biological data, both related to confounding effects. The first is the intrinsic complexity of the data: an observed sample can be a mixture of multiple underlying sources, of which we may be interested in only one or a few. The second comes from the acquisition process: different samples may be gathered at different times and/or from different locations, so each sample carries a specific distortion that must be carefully addressed. These confounding effects obscure the signals of interest in the acquired data. Specifically, this dissertation addresses the two major challenges in removing confounding effects: alignment and deconvolution. Liquid chromatography–mass spectrometry (LC-MS) is a standard method for proteomics and metabolomics analysis of biological samples. Unfortunately, it suffers from various changes in the retention time (RT) of the same compound in different samples, and these must be corrected (aligned) during data processing. Classic alignment methods, such as those in the popular XCMS package, often assume a single time-warping function for each sample. Thus, the potentially varying RT drift for compounds with different masses in a sample is neglected, and the systematic change in RT drift across run order is often not considered. Therefore, these methods cannot effectively correct all misalignments. To utilize this information, we develop an integrated reference-free profile alignment method, neighbor-wise compound-specific Graphical Time Warping (ncGTW), that can detect misaligned features and align profiles by leveraging expected RT drift structures and compound-specific warping functions. Specifically, ncGTW uses individualized warping functions for different compounds and assigns constraint edges on warping functions of neighboring samples. We applied ncGTW to two large-scale metabolomics LC-MS datasets; it identified many misaligned features and successfully realigned them. These features would otherwise be discarded or left uncorrected by existing methods. When the desired signal is buried in a mixture, deconvolution is needed to recover the pure sources. Many biological questions can be better addressed when the data is in the form of individual sources instead of mixtures. Though there are some promising supervised deconvolution methods, unsupervised deconvolution is still needed when no a priori information is available. Among current unsupervised methods, Convex Analysis of Mixtures (CAM) is the most theoretically solid and best-performing one. However, it has some major limitations. Most importantly, the overall time complexity can be very high, especially when analyzing a large dataset or a dataset with many sources. Also, because some steps are stochastic and heuristic, the deconvolution result is not accurate enough. To address these problems, we redesigned the modules of CAM.
In the feature clustering step, we propose a clustering method, radius-fixed clustering, which not only controls the spatial extent of each cluster but also identifies outliers simultaneously. This avoids the disadvantages of K-means clustering, such as instability and the need to specify the number of clusters. Moreover, when identifying the convex hull, we replace Quickhull with linear programming, which decreases the computation time significantly. To avoid the heuristic and approximate step in optimal simplex identification, we propose a greedy search strategy instead. The experimental results demonstrate a substantial improvement in computation time, and the accuracy of the deconvolution is also shown to be higher than that of the original CAM. / Doctor of Philosophy / Due to the complexity of biological data, there are two major pre-processing steps: alignment and deconvolution. The alignment step corrects time- and location-related data acquisition distortion by aligning the detected signals to a reference signal. Though many alignment methods have been proposed for biological data, most of them fail to consider the relationships among samples carefully. This structural information can help alignment when the data is noisy and/or irregular. To utilize this information, we develop a new method, Neighbor-wise Compound-specific Graphical Time Warping (ncGTW), inspired by graph theory. This new alignment method not only utilizes the structural information but also provides a reference-free solution. We show that the performance of our new method is better than that of other methods in both simulations and real datasets. When the signal comes from a mixture, deconvolution is needed to recover the pure sources. Many biological questions can be better addressed when the data is in the form of single sources instead of mixtures. There is a classic unsupervised deconvolution method: Convex Analysis of Mixtures (CAM). However, it has some limitations. For example, the time complexity of some steps is very high, so for a large dataset or a dataset with many sources the computation time would be extremely long. Also, because some steps are stochastic and heuristic, the deconvolution result may not be accurate enough. We improved CAM, and the experimental results show that both the speed and the accuracy of the deconvolution are significantly improved.
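As a rough illustration of the radius-fixed clustering idea mentioned above (each cluster confined to a fixed radius, with small residual groups treated as outliers), a minimal sketch follows; the seeding rule, radius `r`, and `min_size` threshold are assumptions for the example, not the implementation used in the redesigned CAM.

```python
# Hedged sketch of a radius-fixed clustering step (illustration of the idea only).
# Each cluster is confined to a fixed radius r; points that end up in very small
# groups are flagged as outliers.
import numpy as np

def radius_fixed_clustering(X, r, min_size=3):
    n = len(X)
    labels = np.full(n, -1)          # -1 marks unassigned / outlier
    unassigned = set(range(n))
    cluster_id = 0
    while unassigned:
        seed = next(iter(unassigned))
        # members are all unassigned points within radius r of the seed
        members = [i for i in unassigned
                   if np.linalg.norm(X[i] - X[seed]) <= r]
        if len(members) >= min_size:
            for i in members:
                labels[i] = cluster_id
            cluster_id += 1
        unassigned.difference_update(members)
    return labels  # cluster index per point, -1 for outliers
```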
133

Ultra-Wideband for Communications: Spatial Characteristics and Interference Suppression

Bharadwaj, Vivek 21 June 2005 (has links)
Ultra-wideband (UWB) communication is increasingly being considered as an attractive solution for high-data-rate, short-range wireless and position location applications. Knowledge of the statistical nature of the channel is necessary to design wireless systems that provide optimum performance. This thesis investigates the spatial characteristics of the channel based on measurements conducted using UWB pulses in an indoor office environment. The statistics of the received signal energy illustrate the low spatial fading of UWB signals. The distribution of the angle of arrival (AOA) of the multipath components is obtained using a two-dimensional deconvolution algorithm called the Sensor-CLEAN algorithm. A spatial channel model that incorporates the spatial and temporal features of the channel is developed based on the AOA statistics. The performance of the Sensor-CLEAN algorithm is evaluated briefly by application to known artificial channels. UWB systems co-exist with narrowband and other wideband systems. Even though UWB systems enjoy the advantage of processing gain (the ratio of bandwidth to data rate), their low energy per pulse may allow narrowband interferers (NBI) to severely degrade performance. A technique to suppress NBI using multiple antennas is presented in this thesis. It exploits the vast difference in spatial fading characteristics between UWB signals and NBI by implementing a simple selection diversity scheme, and it is shown that this simple scheme can provide strong performance benefits. / Master of Science
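A minimal sketch of the selection-diversity idea: because narrowband interference fades far more deeply across antennas than the UWB signal does, picking the branch with the least in-band interference power already helps. The FFT-based power estimate and the normalized band edges are assumptions made for the illustration, not the receiver structure analyzed in the thesis.

```python
# Hedged sketch of antenna selection diversity against narrowband interference (NBI).
import numpy as np

def select_branch(received, interference_band):
    """received: (num_antennas, num_samples) array of sampled branch signals.
    interference_band: (f_low, f_high) given as fractions of the sampling rate."""
    spectra = np.abs(np.fft.rfft(received, axis=1)) ** 2
    freqs = np.fft.rfftfreq(received.shape[1])
    in_band = (freqs >= interference_band[0]) & (freqs <= interference_band[1])
    nbi_power = spectra[:, in_band].sum(axis=1)   # estimated NBI power per branch
    best = int(np.argmin(nbi_power))              # pick the least-interfered branch
    return best, received[best]
```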
134

Approximate Deconvolution Reduced Order Modeling

Xie, Xuping 01 February 2016 (has links)
This thesis proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (ν = 10⁻³). / Master of Science
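As a rough illustration of the approximate deconvolution ingredient, one standard way to approximately invert a linear filter is a van Cittert-type truncated series; the matrix filter `G` and the truncation order `N` below are assumptions for the example rather than the ROM setting used in the thesis.

```python
# Hedged sketch of approximate deconvolution (AD) by a truncated van Cittert series.
# Given a linear filter G and a filtered field u_bar = G @ u, approximate
#   u ≈ sum_{n=0..N} (I - G)^n @ u_bar.
import numpy as np

def approximate_deconvolution(G, u_bar, N=3):
    I = np.eye(G.shape[0])
    u_approx = np.zeros(len(u_bar))
    term = np.array(u_bar, dtype=float)
    for _ in range(N + 1):
        u_approx += term
        term = (I - G) @ term       # next term of the series
    return u_approx
```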
135

Filter Based Stabilization Methods for Reduced Order Models of Convection-Dominated Systems

Moore, Ian Robert 15 May 2023 (has links)
In this thesis, I examine filtering-based stabilization methods to design new regularized reduced order models (ROMs) for under-resolved simulations of unsteady, nonlinear, convection-dominated systems. The new ROMs proposed are variable-delta filtering applied to the evolve-filter-relax ROM (V-EFR ROM), variable-delta filtering applied to the Leray ROM, and the approximate deconvolution Leray ROM (ADL-ROM). They are tested in the numerical setting of the Burgers equation, a nonlinear, time-dependent problem in one spatial dimension. Regularization is considered for the low-viscosity, convection-dominated setting. / Master of Science / Numerical solutions of partial differential equations often cannot be computed efficiently in a way that fully captures the true behavior of the underlying model or differential equation, especially if significant changes in the solution occur over a very small spatial area. In this case, non-physical numerical artifacts may appear in the computed solution. We discuss methods of treating these calculations with the goal of improving the fidelity of numerical solutions with respect to the original model.
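A minimal sketch of one evolve-filter-relax step, the backbone of the V-EFR ROM named above; the callables and the fixed relaxation parameter are placeholders for illustration (in the thesis the filter radius delta can vary and the step acts on ROM coefficients).

```python
# Hedged sketch of a single evolve-filter-relax (EFR) time step (illustrative only).
import numpy as np

def efr_step(u, evolve, spatial_filter, relax=0.1):
    """u: current state; evolve: one step of the unregularized model;
    spatial_filter: a smoothing operator; relax: relaxation parameter in [0, 1]."""
    u_evolved = evolve(u)                                 # evolve
    u_filtered = spatial_filter(u_evolved)                # filter
    return (1 - relax) * u_evolved + relax * u_filtered   # relax
```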
136

Mathematical Modeling and Deconvolution for Molecular Characterization of Tissue Heterogeneity

Chen, Lulu 22 January 2020 (has links)
Tissue heterogeneity, arising from intermingled cellular or tissue subtypes, significantly obscures the analysis of molecular expression data derived from complex tissues. Existing computational methods for deconvolving mixed subtype signals rely almost exclusively on supervising information, requiring subtype-specific markers, the number of subtypes, or subtype compositions in individual samples. We develop a fully unsupervised deconvolution method to dissect complex tissues into molecularly distinctive tissue or cell subtypes directly from mixture expression profiles. We implement an R package, deconvolution by Convex Analysis of Mixtures (debCAM), that can automatically detect tissue- or cell-specific markers, determine the number of constituent subtypes, calculate subtype proportions in individual samples, and estimate tissue/cell-specific expression profiles. We demonstrate the performance and biomedical utility of debCAM on gene expression, methylation, and proteomics data. With enhanced data preprocessing and prior knowledge incorporation, the debCAM software tool will allow biologists to perform a deep and unbiased characterization of tissue remodeling in many biomedical contexts. Purified expression profiles from physical experiments provide both ground truth and a priori information that can be used to validate unsupervised deconvolution results or improve supervision for various deconvolution methods. Detecting tissue- or cell-specific expressed markers from purified expression profiles plays a critical role in molecularly characterizing and determining tissue or cell subtypes. Unfortunately, classic differential analysis assumes a convenient test statistic and associated null distribution that are inconsistent with the definition of markers and thus result in a high false positive rate or low detection power. We describe a statistically principled marker detection method, the One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG) test, that estimates a mixture null distribution model by applying novel permutation schemes. Validated with realistic synthetic data sets for both type 1 error and detection power, the OVESEG-test applied to benchmark gene expression data sets detects many known and de novo subtype-specific expressed markers. Subsequent supervised deconvolution results, obtained using markers detected by the OVESEG-test, show superior performance compared with popular peer methods. While the current debCAM approach can dissect mixed signals from multiple samples into the 'averaged' expression profiles of subtypes, many subsequent molecular analyses of complex tissues require sample-specific deconvolution, where each sample is a mixture of 'individualized' subtype expression profiles. The between-sample variation embedded in sample-specific subtype signals provides critical information for detecting subtype-specific molecular networks and uncovering hidden crosstalk. However, sample-specific deconvolution is an underdetermined and challenging problem because there are more variables than observations. We propose and develop debCAM2.0 to estimate sample-specific subtype signals by nuclear norm regularization, where the hyperparameter value is determined by a random-entry-exclusion-based cross-validation scheme. We also derive an efficient ADMM-based optimization approach to enable application of debCAM2.0 in large-scale biological data analyses.
Experimental results on realistic simulation data sets show that debCAM2.0 can successfully recover subtype-specific correlation networks that are otherwise unobtainable using existing deconvolution methods. / Doctor of Philosophy / Tissue samples are essentially mixtures of tissue or cellular subtypes, where the proportions of individual subtypes vary across different tissue samples. Data deconvolution aims to dissect tissue heterogeneity into biologically important subtypes, their proportions, and their marker genes. The physical solution to mitigate tissue heterogeneity is to isolate pure tissue components prior to molecular profiling. However, these experimental methods are time-consuming, expensive, and may alter the expression values during isolation. Existing literature primarily focuses on supervised deconvolution methods, which require a priori information. This approach has an inherent problem, as it relies on the quality and accuracy of that a priori information. In this dissertation, we propose and develop a fully unsupervised deconvolution method, deconvolution by Convex Analysis of Mixtures (debCAM), that can estimate the mixing proportions and 'averaged' expression profiles of individual subtypes present in heterogeneous tissue samples. Furthermore, we also propose and develop debCAM2.0, which can estimate 'individualized' expression profiles of participating subtypes in complex tissue samples. Subtype-specific expressed markers, or marker genes (MGs), serve as critical a priori information for supervised deconvolution. MGs are exclusively and consistently expressed in a particular tissue or cell subtype, yet detecting such unique MGs across many subtypes is a challenging task. We propose and develop a statistically principled method, the One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG) test, for robust detection of MGs from purified profiles of many subtypes.
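The nuclear-norm regularization mentioned for debCAM2.0 is typically handled through singular value thresholding inside an ADMM loop; a hedged sketch of that building block is shown below, with the threshold `tau` and the surrounding update scheme left as assumptions rather than the debCAM2.0 algorithm itself.

```python
# Hedged sketch of the nuclear-norm building block (singular value thresholding).
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thresh) @ Vt

# Inside an ADMM loop, each sample-specific signal update would apply an svt()
# step, encouraging low-rank (shared-structure) solutions while a data-fidelity
# term keeps the mixing model close to the observed mixture for each sample.
```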
137

Improving fMRI Classification Through Network Deconvolution

Martinek, Jacob 01 January 2015 (has links) (PDF)
The structure of regional correlation graphs built from fMRI-derived data is frequently used in algorithms to automatically classify brain data. Transformations are applied to the data during pre-processing to remove irrelevant or inaccurate information and to ensure that an accurate representation of the subject's resting-state connectivity is attained. Our research suggests and confirms that such pre-processed data still exhibits inherent transitivity, which is expected to obscure the true relationships between regions. This obfuscation prevents known solutions from developing an accurate understanding of a subject's functional connectivity. By removing correlative transitivity, connectivity between regions is made more specific and automated classification is expected to improve. The task of using fMRI to automatically diagnose Attention Deficit/Hyperactivity Disorder was posed by the ADHD-200 Consortium in a competition to draw in researchers and new ideas from outside the neuroimaging discipline. Researchers have since worked with the competition dataset to produce ever-increasing detection rates. Our approach was empirically tested with a known solution to this problem to compare processing of treated and untreated data, and the detection rates were shown to improve in all cases, with a weighted average increase of 5.88%.
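A hedged sketch of a network-deconvolution transform for removing correlative transitivity from a correlation matrix is shown below. It follows a common closed-form formulation (direct effects recovered from the observed matrix by an eigenvalue mapping); the rescaling constant `beta` and this particular variant are assumptions and may differ from the exact treatment applied in the thesis.

```python
# Hedged sketch of removing transitive (indirect) paths from a correlation matrix.
# Model: observed similarities = direct effects plus their transitive closure,
# so the direct matrix has eigenvalues lam / (1 + lam) of the observed matrix.
import numpy as np

def network_deconvolution(G_obs, beta=0.9):
    vals, vecs = np.linalg.eigh((G_obs + G_obs.T) / 2)   # symmetrize for stability
    # simple rescaling that keeps the spectrum in (-beta, beta), away from the
    # singularity at -1 (published variants use a related but more detailed rescaling)
    m = np.abs(vals).max()
    if m > 0:
        vals = vals * (beta / m)
    vals_dir = vals / (1.0 + vals)                       # strip indirect contributions
    return vecs @ np.diag(vals_dir) @ vecs.T
```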
138

A framework for blind signal correction using optimized polyspectra-based cost functions

Braeger, Steven W. 01 January 2009 (has links)
"Blind" inversion of the effects of a given operator on a signal is an extremely difficult task that has no easy solutions. However,. Dr. Hany Farid has published several works that each individua:lly appear to achieve exactly this seemingly impossible result. In this work, we contribute a comprehensive overview of the published applications of blind process inversion, as well as provide the generalized form of the algorithms and requirements that are found in each of these applications, thereby formulating and explaining a general framework for blind process inversion using Farid's Algorithm. Additionally, we explain the knowledge required to derive the ROSA-based cost function on which Farid's Algorithm depends. As our primary contribution, we analyze the algorithmic complexity of this cost function based on the way it is currently, naively calculated, and derive a new algorithm to compute this cost function that has greatly reduced algorithmic complexity. Finally, we suggest an additional application of Farid's Algorithm to the problem of blindly estimating true camera response functions from a single image.
139

Analysis of Internal Boundaries and Transition Regions in Geophysical Systems with Advanced Processing Techniques

Krützmann, Nikolai Christian January 2013 (has links)
This thesis examines the utility of the Rényi entropy (RE), a measure of the complexity of probability density functions, as a tool for finding physically meaningful patterns in geophysical data. Initially, the RE is applied to observational data of long-lived atmospheric tracers in order to analyse the dynamics of stratospheric transition regions associated with barriers to horizontal mixing. Its wider applicability is investigated by testing the RE as a method for highlighting internal boundaries in snow and ice from ground penetrating radar (GPR) recordings. High-resolution 500 MHz GPR soundings of dry snow were acquired at several sites near Scott Base, Antarctica, in 2008 and 2009, with the aim of using the RE to facilitate the identification and tracking of subsurface layers to extrapolate point measurements of accumulation from snow pits and firn cores to larger areas. The atmospheric analysis focuses on applying the RE to observational tracer data from the EOS-MLS satellite instrument. Nitrous oxide (N2O) is shown to exhibit subtropical RE maxima in both hemispheres. These peaks are a measure of the tracer gradients that mark the transition between the tropics and the mid-latitudes in the stratosphere, also referred to as the edges of the tropical pipe. The RE maxima are shown to be located closer to the equator in winter than in summer. This agrees well with the expected behaviour of the tropical pipe edges and is similar to results reported by other studies. Compared to other stratospheric mixing metrics, the RE has the advantage that it is easy to calculate as it does not, for example, require conversion to equivalent latitude and does not rely on dynamical information such as wind fields. The RE analysis also reveals occasional sudden poleward shifts of the southern hemisphere tropical pipe edge during austral winter which are accompanied by increased mid-latitude N2O levels. These events are investigated in more detail by creating daily high-resolution N2O maps using a two-dimensional trajectory model and MERRA reanalysis winds to advect N2O observations forwards and backwards in time on isentropic surfaces. With the aid of this ‘domain filling’ technique it is illustrated that the increase in southern hemisphere mid-latitude N2O during austral winter is probably the result of the cumulative effect of several large-scale, episodic leaks of N2O-rich air from the tropical pipe. A comparison with the global distribution of potential vorticity strongly suggests that irreversible mixing related to planetary wave breaking is the cause of the leak events. Between 2004 and 2011 the large-scale leaks are shown to occur approximately every second year and a connection to the equatorial quasi-biennial oscillation is found to be likely, though this cannot be established conclusively due to the relatively short data set. Identification and tracking of subsurface boundaries, such as ice layers in snow or the bedrock of a glacier, is the focus of the cryospheric part of this project. The utility of the RE for detecting amplitude gradients associated with reflections in GPR recordings is initially tested on a 25 MHz sounding of an Antarctic glacier. The results show distinct regions of increased RE values that allow identification of the glacial bedrock along large parts of the profile. Due to the low computational requirements, the RE is found to be an effective pseudo gain function for initial analysis of GPR data in the field.
While other gain functions often have to be tuned to give a good contrast between reflections and background noise over the whole vertical range of a profile, the RE tends to assign all detectable amplitude gradients a similar (high) value, resulting in a clear contrast between reflections and background scattering. Additionally, theoretical considerations allow the definition of a ‘standard’ data window size with which the RE can be applied to recordings made by most pulsed GPR systems and centre frequencies. This is confirmed by tests with higher frequency recordings (50 and 500 MHz) acquired on the McMurdo Ice Shelf. However, these also reveal that the RE processing is less reliable for identifying more closely spaced reflections from internal layers in dry snow. In order to complete the intended high-resolution analysis of accumulation patterns by tracking internal snow layers in the 500 MHz data from two test sites, a different processing approach is developed. Using an estimate of the emitted waveform from direct measurement, deterministic deconvolution via the Fourier domain is applied to the high-resolution GPR data. This reveals unambiguous reflection horizons which can be observed in repeat measurements made one year apart. Point measurements of average accumulation from snow pits and firn cores are extrapolated to larger areas by identifying and tracking a dateable dust layer horizon in the radargrams. Furthermore, it is shown that annual compaction rates of snow can be estimated by tracking several internal reflection horizons along the deconvolved radar profiles and calculating the average change in separation of horizon pairs from one year to the next. The technique is complementary to point measurements from other studies and the derived compaction rates agree well with published values and theoretical estimates.
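For reference, the Rényi entropy of order alpha of a discrete distribution p is H_alpha = log(sum_i p_i^alpha) / (1 - alpha); a sliding-window sketch of how such a complexity measure can be applied to a sampled trace is given below. The window length, histogram binning, and alpha value are illustrative assumptions rather than the settings used in the thesis.

```python
# Hedged sketch: sliding-window Rényi entropy of a sampled trace (illustrative only).
import numpy as np

def renyi_entropy(p, alpha=2.0):
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)   # undefined for alpha = 1

def sliding_renyi(trace, window=64, bins=32, alpha=2.0):
    out = np.empty(len(trace) - window)
    for i in range(len(out)):
        seg = trace[i:i + window]
        hist, _ = np.histogram(seg, bins=bins)
        p = hist / hist.sum()                            # window amplitude distribution
        out[i] = renyi_entropy(p, alpha)
    return out   # changes in this profile flag changes in local signal complexity
```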
140

Sparsity Motivated Auditory Wavelet Representation and Blind Deconvolution

Adiga, Aniruddha January 2017 (has links) (PDF)
In many scenarios, events such as singularities and transients that carry important information about a signal undergo spreading during acquisition or transmission, and it is important to localize these events. For example, edges in an image and point sources in a microscopy or astronomical image are blurred by the point-spread function (PSF) of the acquisition system, while in a speech signal, the epochs corresponding to glottal closure instants are shaped by the vocal tract response. Such events can be extracted with the help of techniques that promote sparsity, which enables separation of the smooth components from the transient ones. In this thesis, we consider the development of such sparsity-promoting techniques. The contributions of the thesis are three-fold: (i) an auditory-motivated continuous wavelet design and representation, which helps identify singularities; (ii) a sparsity-driven deconvolution technique; and (iii) a sparsity-driven deconvolution technique for the reconstruction of finite-rate-of-innovation (FRI) signals. We use the speech signal to illustrate the performance of the techniques in the first two parts and super-resolution microscopy (2-D) for the third part. In the first part, we develop a continuous wavelet transform (CWT) starting from an auditory motivation. Wavelet analysis provides good time and frequency localization, which has made it a popular tool for time-frequency analysis of signals. The CWT is a multiresolution analysis tool that involves decomposition of a signal using a constant-Q wavelet filterbank, akin to the time-frequency analysis performed by the basilar membrane in the peripheral human auditory system. This connection motivated us to develop wavelets that possess auditory localization capabilities. Gammatone functions are extensively used in modeling the basilar membrane, but the non-zero average of the functions poses a hurdle. We construct bona fide wavelets from the Gammatone function, called Gammatone wavelets, and analyze their properties such as admissibility, time-bandwidth product, and vanishing moments. Of particular interest is the vanishing-moments property, which enables the wavelet to suppress smooth regions in a signal, leading to sparsification. We show how this property of the Gammatone wavelets, coupled with multiresolution analysis, can be employed for singularity and transient detection. Using these wavelets, we also construct equivalent filterbank models and obtain cepstral feature vectors from such a representation. We show that the Gammatone wavelet cepstral coefficients (GWCC) are effective for robust speech recognition compared with mel-frequency cepstral coefficients (MFCC). In the second part, we consider the problem of sparse blind deconvolution (SBD) starting from a signal obtained as the convolution of an unknown PSF and a sparse excitation. The BD problem is ill-posed and the goal is to employ sparsity to arrive at an accurate solution. We formulate the SBD problem within a Bayesian framework. The estimation of the filter and the excitation involves optimization of a cost function that consists of an ℓ2 data-fidelity term and an ℓp-norm (p ∈ [0, 1]) regularizer as the sparsity-promoting prior. Since the ℓp-norm is not differentiable at the origin, we consider a smoothed version of the ℓp-norm as a proxy in the optimization. Apart from the regularizer being non-convex, the data term is also non-convex in the filter and excitation, as they are both unknown.
We optimize the non-convex cost using an alternating minimization strategy and develop an alternating ℓp-ℓ2 projections algorithm (ALPA). We demonstrate convergence of the iterative algorithm and analyze in detail the role of the pseudo-inverse solution as an initialization for ALPA, providing probabilistic bounds on its accuracy that account for the presence of noise and the condition number of the linear system of equations. We also consider the case of bounded noise and derive tight tail bounds using the Hoeffding inequality. As an application, we consider the problem of blind deconvolution of speech signals. In the linear model for speech production, voiced speech is assumed to be the result of a quasi-periodic impulse train exciting a vocal-tract filter. The locations of the impulses, or epochs, indicate the glottal closure instants, and the spacing between them the pitch. Hence, the excitation in the case of voiced speech is sparse, and its deconvolution from the vocal-tract filter is posed as an SBD problem. We employ ALPA for SBD and show that the excitation obtained is sparser than the excitations obtained using sparse linear prediction, the smoothed ℓ1/ℓ2 sparse blind deconvolution algorithm, and majorization-minimization-based sparse deconvolution techniques. We also consider the problem of epoch estimation and show that epochs estimated by ALPA in both clean and noisy conditions are closer to the instants indicated by the electroglottograph when compared with the estimates provided by the zero-frequency filtering technique, which is the state-of-the-art epoch estimation technique. In the third part, we consider the problem of deconvolution of a specific class of continuous-time signals called finite-rate-of-innovation (FRI) signals, which are not bandlimited but are specified by a finite number of parameters over an observation interval. The signal is assumed to be a linear combination of delayed versions of a prototypical pulse. The reconstruction problem is posed as a 2-D SBD problem. The kernel is assumed to have a known form but with unknown parameters. Given the sampled version of the FRI signal, the delays, quantized to the nearest point on the sampling grid, are first estimated using a proximal-operator-based alternating ℓp-ℓ2 algorithm (ALPAprox), and then super-resolved to obtain off-grid (OG) estimates using gradient-descent optimization. The overall technique is termed OG-ALPAprox. We show an application of OG-ALPAprox to a particular modality of super-resolution microscopy (SRM) called stochastic optical reconstruction microscopy (STORM). The resolution of the traditional optical microscope is limited by diffraction, a restriction termed Abbe's limit. The goal of SRM is to engineer the optical imaging system to resolve structures in specimens, such as proteins, whose dimensions are smaller than the diffraction limit. The specimen to be imaged is tagged or labeled with light-emitting or fluorescent chemical compounds called fluorophores. These compounds specifically bind to proteins and exhibit fluorescence upon excitation. The fluorophores are assumed to be point sources, and the light emitted by them undergoes spreading due to diffraction. STORM employs a sequential approach, wherein in each step only a few fluorophores are randomly excited and the image is captured by a sensor array. The obtained image is diffraction-limited; however, the separation between the fluorophores allows the point sources to be localized with high precision. The localization is performed using Gaussian peak-fitting.
This process of random excitation coupled with localization is performed sequentially and subsequently consolidated to obtain a high-resolution image. We pose the localization as an SBD problem and employ OG-ALPAprox to estimate the locations. We also report comparisons with the de facto standard Gaussian peak-fitting algorithm and show that the statistical performance of our approach is superior. Experimental results on real data show that the reconstruction quality is on par with Gaussian peak-fitting.
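A hedged sketch of an alternating minimization for sparse blind deconvolution with a smoothed ℓp penalty is given below to illustrate the structure of such algorithms; the plain gradient updates, step size, and smoothing constant are assumptions and do not reproduce the ALPA projections, convergence analysis, or initialization studied in the thesis.

```python
# Hedged sketch of alternating minimization for sparse blind deconvolution
# with a smoothed ℓp sparsity penalty on the excitation (illustrative only).
import numpy as np

def smoothed_lp_grad(x, p=0.5, eps=1e-3):
    # gradient of sum (x^2 + eps)^(p/2), a differentiable proxy for the ℓp norm
    return p * x * (x ** 2 + eps) ** (p / 2 - 1)

def alternating_sbd(y, filt_len=32, p=0.5, lam=0.1, iters=200, step=1e-2):
    """y: 1-D float array; estimate a filter h and sparse excitation x with y ≈ conv(h, x)."""
    x = y.copy()                        # initialize excitation with the observation
    h = np.zeros(filt_len); h[0] = 1.0  # initialize filter as an impulse
    for _ in range(iters):
        r = np.convolve(h, x)[: len(y)] - y
        # gradient step in x: data fidelity + smoothed ℓp sparsity penalty
        gx = np.convolve(r, h[::-1], mode="full")[filt_len - 1: filt_len - 1 + len(x)]
        x -= step * (gx + lam * smoothed_lp_grad(x, p))
        # gradient step in h: data fidelity only
        r = np.convolve(h, x)[: len(y)] - y
        gh = np.convolve(r, x[::-1], mode="full")[len(x) - 1: len(x) - 1 + filt_len]
        h -= step * gh
    return h, x
```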
