201 |
Contribution to fluorescence microscopy, 3D thick samples deconvolution and depth-variant PSF / Contribution à la microscopie de fluorescence, déconvolution des échantillons épais avec PSF variables en profondeur. Maalouf, Elie, 20 December 2010.
La reconstruction 3D par coupes sériées en microscopie optique est un moyen efficace pour étudier des spécimens biologiques fluorescents. Dans un tel système, la formation d'une image peut être représentée comme la convolution linéaire de l'objet avec la réponse impulsionnelle optique de l'instrument (PSF). Pour une étude quantitative, une estimation de l'objet doit être calculée en utilisant la déconvolution, qui est l'opération inverse de la convolution. Plusieurs algorithmes de déconvolution ont été développés à partir de modèles statistiques ou par inversion directe, mais ces algorithmes reposent sur l'hypothèse de l'invariance spatiale de la PSF pour simplifier et accélérer le traitement. Dans certaines configurations optiques, la PSF 3D change significativement en profondeur, et ignorer ces changements induit des erreurs quantitatives dans l'estimation. Nous proposons un algorithme (EMMA) fondé sur l'hypothèse que l'erreur minimale sur l'estimation obtenue par un algorithme ne tenant pas compte de la non-invariance se situe aux alentours de la position (profondeur) de la PSF utilisée. EMMA utilise des PSF à différentes positions et fusionne les différentes estimations en utilisant des masques d'interpolation linéaires adaptés aux positions des PSF utilisées. Pour obtenir des PSF à différentes profondeurs, un algorithme d'interpolation de PSF a également été développé. La méthode consiste à décomposer les PSF mesurées en utilisant les moments de Zernike pseudo-3D, puis à approximer les variations de chaque moment par une fonction polynomiale. Ces fonctions polynomiales sont ensuite utilisées pour interpoler des PSF aux profondeurs voulues. / The 3-D fluorescence microscope has become the method of choice in the biological sciences for the study of living cells. However, the data acquired with conventional 3-D fluorescence microscopy are not quantitatively significant because of distortions induced by the optical acquisition process. Reliable measurements require the correction of these distortions. Knowing the instrument impulse response, also known as the PSF, one can invert the convolution induced by the microscope, a process known as "deconvolution". However, when the system response is not invariant across the observation field, the classical algorithms can introduce large errors in the results. In this thesis we propose a new approach, easily adapted to any classical deconvolution algorithm, direct or iterative, that bypasses the depth-variant PSF problem without any modification to the underlying algorithm. Based on the hypothesis that the minimal error in an image restored under the invariance assumption is located near the position (depth) of the PSF used, EMMA (Evolutive Merging Masks Algorithm) blends multiple invariance-assumption deconvolutions using a specific set of merging masks. In order to obtain a sufficient number of PSFs at various depths for a better restoration using EMMA (or any other depth-variant deconvolution algorithm), we also propose a 3D PSF interpolation algorithm based on image moment theory, using Zernike polynomials as the decomposition basis. The known PSFs are decomposed into sets of Zernike moments, the variation of each moment with depth is fitted with a polynomial function, and the resulting functions are then used to interpolate the Zernike moments of the PSF at the required depth, from which the interpolated PSF is reconstructed.
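To make the merging idea concrete, here is a minimal Python sketch (not the thesis's implementation): it deconvolves a 2-D (depth, x) slice with a few fixed-depth PSFs using a plain Richardson-Lucy routine and then blends the results with linear masks centred on each PSF's depth. The Gaussian test PSFs, the function names and the choice of Richardson-Lucy as the invariant-PSF deconvolver are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=9):
    """Isotropic 2-D Gaussian kernel, normalised to sum to 1 (toy stand-in for a measured PSF)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def richardson_lucy(image, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution under the spatially invariant PSF assumption."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        estimate *= fftconvolve(image / (blurred + 1e-12), psf_mirror, mode="same")
    return estimate

def emma_like_merge(image, psfs, depths):
    """Blend fixed-PSF deconvolutions with linear masks centred on each PSF depth (axis 0 = depth)."""
    z = np.arange(image.shape[0])
    restored = np.stack([richardson_lucy(image, p) for p in psfs])
    # Hat-shaped weights: each depth is shared linearly between its two nearest PSF positions.
    weights = np.stack([np.interp(z, depths, np.eye(len(depths))[k]) for k in range(len(depths))])
    return np.einsum("kz,kzx->zx", weights, restored)

# Toy usage: a sparse (depth, x) phantom blurred with one PSF, restored with PSFs at two depths.
rng = np.random.default_rng(0)
phantom = (rng.random((64, 64)) > 0.995).astype(float)
observed = fftconvolve(phantom, gaussian_psf(2.5), mode="same")
restored = emma_like_merge(observed, [gaussian_psf(1.5), gaussian_psf(3.5)], depths=[10, 54])
```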
|
202 |
Étude théorique du second point critique dans le gaz de Bose / Theoretical study of the second critical point in the Bose gas. Beau, Mathieu, 01 October 2010.
Cette thèse présente une description nouvelle et les conséquences physiques de la seconde transition pour le gaz parfait de Bose dans des milieux fortement anisotropes. Nous développons ainsi dans le chapitre 1 une approche dite d'échelle qui permet de revisiter les différents concepts autour de la condensation de Bose-Einstein : la condensation généralisée (M. van den Berg, J. Lewis, J. Pulé, 1986), les cycles infinis (R. Feynman, 1953) et les corrélations à longue portée (O. Penrose, L. Onsager, 1956). Cette nouvelle approche nous permet, dans un premier temps, de montrer l'équivalence entre ces critères de condensation et entre les différentes classifications de condensats. Ensuite, dans les chapitres 2 et 3, nous caractérisons, via notre méthode, les effets physiques (nouvelle température critique, modification des fractions condensées, localisations énergétiques et longueurs de cohérence) pour les gaz de Bose dans des boîtes quasi-2D (Ch. 2) et des pièges harmoniques quasi-1D (Ch. 3) exponentiellement anisotropes. Dans le chapitre 4, nous discutons principalement l'analogie entre cycles et polymères à la P.-G. de Gennes que fournit notre description des cycles via notre méthode d'échelle. / This thesis presents a new description, and the physical consequences, of the second transition for the ideal Bose gas in strongly anisotropic systems. In Chapter 1 we develop a scaling approach that allows us to revisit the concepts surrounding Bose-Einstein condensation: generalised condensation (M. van den Berg, J. Lewis, J. Pulé, 1986), infinite cycles (R. Feynman, 1953) and off-diagonal long-range order (O. Penrose, L. Onsager, 1956). This new approach first allows us to show the equivalence between these condensation criteria and between the different classifications of condensates. Then, in Chapters 2 and 3, we use our method to characterise the physical effects (new critical temperature, modified condensate fractions, energy localisation and coherence lengths) for the Bose gas in quasi-2D boxes (Ch. 2) and quasi-1D harmonic traps (Ch. 3) that are exponentially anisotropic. In Chapter 4, we mainly discuss the analogy between cycles and polymers, in the style of P.-G. de Gennes, provided by our scaling description of the cycles.
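For orientation only (this is the textbook isotropic 3-D result, not a result of the thesis, which deals with exponentially anisotropic quasi-2D and quasi-1D geometries), the standard condensation criterion and critical temperature of the ideal Bose gas read:

```latex
\rho\,\lambda_T^{3}=\zeta(3/2),\qquad
\lambda_T=\sqrt{\frac{2\pi\hbar^{2}}{m k_B T}}
\qquad\Longrightarrow\qquad
k_B T_c=\frac{2\pi\hbar^{2}}{m}\left(\frac{\rho}{\zeta(3/2)}\right)^{2/3},
```

where ρ is the particle density. The second transition studied above arises once this saturation picture is re-examined for strongly anisotropic containers.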
|
203 |
Characterizing the Geometry of a Random Point Cloud. Unknown Date.
This thesis is composed of three main parts. Each chapter is concerned with
characterizing some properties of a random ensemble or stochastic process. The
properties of interest and the methods for investigating them differ between chapters.
We begin by establishing some asymptotic results regarding zeros of random
harmonic mappings, a topic of much interest to mathematicians and astrophysicists
alike. We introduce a new model of harmonic polynomials based on the so-called
"Weyl ensemble" of random analytic polynomials. Building on the work of Li and
Wei [28] we obtain precise asymptotics for the average number of zeros of this model.
The primary tools used in this section are the famous Kac-Rice formula as well as
classical methods in the asymptotic analysis of integrals such as the Laplace method.
Continuing, we characterize several topological properties of this model of
harmonic polynomials. In chapter 3 we obtain experimental results concerning the
number of connected components of the orientation-reversing region as well as the geometry
of the distribution of zeros. The tools used in this section are primarily Monte
Carlo estimation and topological data analysis (persistent homology). Simulations in this section are performed in MATLAB with the help of the computational homology
software package Perseus. While the results in this chapter are empirical rather
than formal proofs, they lead to several enticing conjectures and open problems.
Finally, in chapter 4 we address an industry problem in applied mathematics
and machine learning. The analysis in this chapter implements similar techniques to
those used in chapter 3. We analyze data obtained by observing CAN traffic. CAN (or
Controller Area Network) is a network that allows micro-controllers inside vehicles
to communicate with each other. We propose and demonstrate the effectiveness of an
algorithm for detecting malicious traffic using an approach that discovers and exploits
the natural geometry of the CAN surface and its relationship to random walk Markov
chains. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
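As a purely hypothetical illustration of the general idea of relating CAN traffic to Markov chains (this is not the algorithm developed in the dissertation, and every identifier, ID value and threshold below is invented for the example), one can fit a first-order Markov chain over CAN arbitration IDs from benign traffic and flag message windows whose transition log-likelihood is unusually low:

```python
import numpy as np

def fit_transition_matrix(id_sequence, smoothing=1.0):
    """Estimate P(next_id | current_id) from a benign CAN ID trace (Laplace-smoothed)."""
    ids = sorted(set(id_sequence))
    index = {i: k for k, i in enumerate(ids)}
    counts = np.full((len(ids), len(ids)), smoothing)
    for a, b in zip(id_sequence[:-1], id_sequence[1:]):
        counts[index[a], index[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True), index

def window_log_likelihood(window, P, index, unseen_penalty=-15.0):
    """Average log-probability of consecutive ID transitions within a message window."""
    total, n = 0.0, 0
    for a, b in zip(window[:-1], window[1:]):
        if a in index and b in index:
            total += np.log(P[index[a], index[b]])
        else:
            total += unseen_penalty  # an ID never observed in the benign trace
        n += 1
    return total / max(n, 1)

# Toy usage with made-up arbitration IDs: benign traffic scores high, an injected ID scores low.
rng = np.random.default_rng(0)
benign = list(rng.choice([0x100, 0x200, 0x2A0], size=5000, p=[0.5, 0.3, 0.2]))
P, index = fit_transition_matrix(benign)
suspect = [0x7FF] * 20            # an injected, never-seen ID
print(window_log_likelihood(benign[:20], P, index))
print(window_log_likelihood(suspect, P, index))
```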
|
204 |
Fitting point process by different models. January 1993.
by Wing-yi, Tam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 74-77).
Chapter 1: Introduction (p.1)
Chapter 2: Cox and Lewis' Model and Weibull Process Model
    Section 1: Nonhomogeneous Poisson Process (NHPP) (p.5)
    Section 2: Cox and Lewis' Model (p.7)
    Section 3: Weibull Process Model (p.11)
    Section 4: Test of NHPP (p.14)
Chapter 3: Inference for Geometric Process with Inverse Gaussian Distribution
    Section 1: Geometric Process (GP) (p.18)
    Section 2: Inverse Gaussian Distribution (IG) (p.22)
    Section 3: Simulation (p.25)
    Section 4: Conclusion (p.33)
Chapter 4: Comparison of Geometric Process Model and NHPP Model in Fitting a Point Process
    Section 1: Introduction (p.34)
    Section 2: Real Data Examples (p.39)
    Section 3: Conclusion (p.45)
Tables and Graphs (p.48)
Appendices (p.71)
References (p.74)
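Since the table of contents above only names the Weibull (power-law) process model, a brief hedged sketch may help fix ideas: the fragment below simulates a power-law NHPP with mean function m(t) = (t/θ)^β by the time-change construction and recovers (β, θ) with the standard time-truncated maximum-likelihood estimators. It is illustrative background, not code from the thesis.

```python
import numpy as np

def simulate_power_law_nhpp(beta, theta, t_end, rng):
    """Simulate a Weibull (power-law) NHPP with mean function m(t) = (t/theta)**beta on [0, t_end].

    Uses the time-change theorem: t_i = theta * Gamma_i**(1/beta), where Gamma_i are the
    arrival times of a unit-rate homogeneous Poisson process.
    """
    gamma, times = 0.0, []
    while True:
        gamma += rng.exponential(1.0)
        t = theta * gamma ** (1.0 / beta)
        if t > t_end:
            return np.array(times)
        times.append(t)

def fit_power_law_nhpp(times, t_end):
    """Maximum-likelihood estimates for time-truncated data (standard power-law process formulas)."""
    n = len(times)
    beta_hat = n / np.sum(np.log(t_end / times))
    theta_hat = t_end / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

rng = np.random.default_rng(1)
events = simulate_power_law_nhpp(beta=1.8, theta=5.0, t_end=100.0, rng=rng)
print(fit_power_law_nhpp(events, t_end=100.0))   # should recover roughly (1.8, 5.0)
```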
|
205 |
Warm-start strategies in primal-dual interior point method for linear programming. January 2001.
Lee Sung Tak. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 100-101). / Abstracts in English and Chinese.
Chapter 1: Introduction and Synopsis (p.1)
Chapter 2: Literature Review (p.5)
Chapter 3: The primal-dual interior point algorithm and the self-dual embedding method: Revisit (p.7)
Chapter 4: The Warm-Start Strategy (WSS) (p.25)
Chapter 5: Experimental result and analysis I: Parametric programming case (p.31)
    Section 5.1: The Big-Mac Problem (p.33)
    Section 5.2: The randomly generated problem and the Netlib problem (p.46)
    Section 5.3: Chapter summary (p.53)
Chapter 6: Experimental result and analysis II: Adding rows and columns (p.54)
    Section 6.1: The Big-Mac problem (p.57)
    Section 6.2: The randomly generated problem and the Netlib problem (p.66)
    Section 6.3: The ball constraint problem (p.82)
    Section 6.4: Chapter summary (p.94)
Chapter 7: Summary and conclusion (p.96)
Bibliography (p.100)
Appendix A (p.102)
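For reference, the primal-dual interior point algorithm named in Chapter 3 follows the central path of the standard-form LP pair, i.e. the solutions of the perturbed KKT system as the barrier parameter μ is driven to zero (textbook formulation, not material quoted from the thesis):

```latex
\begin{aligned}
&\text{(P)}\quad \min_{x}\ c^{\top}x \quad \text{s.t.}\ Ax=b,\ x\ge 0,
\qquad\qquad
\text{(D)}\quad \max_{y,s}\ b^{\top}y \quad \text{s.t.}\ A^{\top}y+s=c,\ s\ge 0, \\[4pt]
&\text{central path } (\mu>0):\quad
Ax=b,\qquad A^{\top}y+s=c,\qquad x_i s_i=\mu\ \ (i=1,\dots,n),\qquad x>0,\ s>0 .
\end{aligned}
```

Broadly, a warm start re-uses information from a previously solved, nearby LP (typically its final iterate) as the starting point for the perturbed problem, rather than starting from the default point of the self-dual embedding.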
|
206 |
Chemical state and luminescence imaging of natural and synthetic diamond. Jones, Geraint Owen, January 2011.
This thesis presents work undertaken using synchrotron- and laboratory-based techniques in parallel on the chemical state and luminescence imaging of natural and synthetic diamond. X-ray absorption spectroscopy (XAS) techniques have revealed information on the chemical structure and bonding within brown and variegated type Ia, IIa, CVD and high-pressure, high-temperature (HPHT) treated diamonds. XAS, Raman, X-ray excited optical luminescence (XEOL) and photoluminescence (PL) are some of the techniques that have been applied to characterise and investigate the cause of the brown colouration. The XAS measurements have been undertaken in imaging mode, with the capability of correlating the luminescence image with the brown regions in partial luminescence yield (PLY) and total luminescence yield (TLY). OD-XAS spectra obtained at non-brown and brown regions reveal a higher concentration of sp2-bonded carbon at the brown sites. Raman spectroscopy used in imaging mode also supports this finding.
|
207 |
Exploring complex loss functions for point estimation. Chaisee, Kuntalee, January 2015.
This thesis presents several aspects of simulation-based point estimation in the context of Bayesian decision theory. The first part of the thesis (Chapters 4 - 5) concerns the estimation-then-minimisation (ETM) method as an efficient computational approach to compute simulation-based Bayes estimates. We are interested in applying the ETM method to compute Bayes estimates under some non-standard loss functions. However, for some loss functions, the ETM method cannot be implemented straightforwardly. We examine the ETM method via Taylor approximations and cubic spline interpolations for Bayes estimates in one dimension. In two dimensions, we implement the ETM method via bicubic interpolation. The second part of the thesis (Chapter 6) concentrates on the analysis of a mixture posterior distribution with a known number of components using Markov chain Monte Carlo (MCMC) output. We aim for Bayesian point estimation related to a label-invariant loss function, which allows us to estimate the parameters of the mixture posterior distribution without dealing with label switching. We also investigate the uncertainty of the point estimates, which is presented through the uncertainty bound and the crude uncertainty bound of the expected loss evaluated at the point estimates based on MCMC samples. The crude uncertainty bound is relatively cheap, but it seems to be unreliable. On the other hand, the uncertainty bound, which approximates a 95% confidence interval, seems to be reliable but is very computationally expensive. In the third part of the thesis (Chapter 7), we propose a possible alternative way to present the uncertainty of Bayesian point estimates. We adopt the idea of leaving out observations from the jackknife method to compute jackknife-Bayes estimates. We then use the jackknife-Bayes estimates to visualise the uncertainty of the Bayes estimates. Further investigation is required to improve the method, and some suggestions are made to maximise the efficiency of this approach.
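A stripped-down one-dimensional sketch of the estimate-then-minimise idea is given below (a hedged illustration only, not the thesis's code): the expected loss is averaged over MCMC draws on a coarse grid of candidate estimates, interpolated with a cubic spline, and the spline is then minimised. The asymmetric LINEX loss is an assumed example of a non-standard loss.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def linex_loss(theta, a, c=1.0):
    """LINEX loss: asymmetric, penalising over- and under-estimation differently."""
    d = c * (a - theta)
    return np.exp(d) - d - 1.0

def etm_point_estimate(posterior_draws, loss, n_grid=41):
    """Estimate-then-minimise: average the loss over MCMC draws on a coarse grid of
    candidate actions, interpolate with a cubic spline, then minimise the spline."""
    grid = np.linspace(posterior_draws.min(), posterior_draws.max(), n_grid)
    expected = np.array([loss(posterior_draws, a).mean() for a in grid])
    spline = CubicSpline(grid, expected)
    res = minimize_scalar(spline, bounds=(grid[0], grid[-1]), method="bounded")
    return res.x

# Toy usage: draws standing in for an MCMC sample from a posterior.
draws = np.random.default_rng(2).normal(loc=1.0, scale=0.5, size=10_000)
print(etm_point_estimate(draws, linex_loss))   # pulled below the posterior mean by the asymmetry
```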
|
208 |
A Look Into Human Brain Activity with EEG Data Surface Reconstruction. Pothayath, Naveen, 23 April 2018.
EEG has been used to explore the electrical activity of the brain for many decades. During that time, different components of the EEG signal have been isolated, characterized, and associated with a variety of brain activities. However, no widely accepted model characterizing the spatio-temporal structure of the full-brain EEG signal exists to date.

Modeling the spatio-temporal nature of the EEG signal is a daunting task. The spatial component of EEG is defined by the locations of recording electrodes (ranging from 2 to 256 in number) placed on the scalp, while its temporal component is defined by the electrical potentials the electrodes detect. The EEG signal is generated by the composite electrical activity of large neuron assemblies in the brain. These neuronal units often perform independent tasks, giving the EEG signal a highly dynamic and non-linear character. These characteristics make the raw EEG signal challenging to work with. Thus, most research focuses on extracting and isolating targeted spatial and temporal components of interest. While component isolation strategies like independent component analysis are useful, their effectiveness is limited by noise contamination and poor reproducibility. These drawbacks to feature extraction could be improved significantly if they were informed by a global spatio-temporal model of EEG data.

The aim of this thesis is to introduce a novel data-surface reconstruction (DSR) technique for EEG which can model the integrated spatio-temporal structure of EEG data. To produce physically intuitive results, we utilize a hyper-coordinate transformation which integrates both spatial and temporal information of the EEG signal into a unified coordinate system. We then apply a non-uniform rational B-spline (NURBS) fitting technique which minimizes the point distance from the computed surface to each element of the transformed data. To validate the effectiveness of this proposed method, we conduct an evaluation using a 5-state classification problem, with 1 baseline and 4 meditation states, comparing the classification accuracy using the raw EEG data versus the surface-reconstructed data in the broadband range and the alpha, beta, delta, gamma and higher gamma frequency bands. Results demonstrate that the fitted data consistently outperforms the raw data in the broadband spectrum and in all frequency bands.
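As a rough illustration of the surface-fitting step (a hedged sketch only: it substitutes a plain smoothing B-spline surface over a (channel index, time) grid for the thesis's NURBS fit in the hyper-coordinate system, and the synthetic signal and smoothing factor are invented for the example):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(3)

# Toy "EEG": 32 channels x 512 samples of a 10 Hz rhythm whose amplitude varies across the
# scalp, plus noise. This stands in for a real recording.
n_ch, n_t, fs = 32, 512, 256.0
t = np.arange(n_t) / fs
chan = np.arange(n_ch)
clean = np.sin(2 * np.pi * 10 * t)[None, :] * np.hanning(n_ch)[:, None]
eeg = clean + 0.5 * rng.standard_normal((n_ch, n_t))

# Fit a smoothing B-spline surface V(channel, time) to the recording; the smoothing factor s
# is a free parameter chosen here by hand.
surface = RectBivariateSpline(chan, t, eeg, kx=3, ky=3, s=n_ch * n_t * 0.25)
fitted = surface(chan, t)              # evaluate the surface back on the original grid

rms_before = np.sqrt(np.mean((eeg - clean) ** 2))
rms_after = np.sqrt(np.mean((fitted - clean) ** 2))
print(f"RMS error vs. clean signal: raw {rms_before:.3f}, surface-fitted {rms_after:.3f}")
```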
|
209 |
Many-Light Real-Time Global Illumination using Sparse Voxel Octree. Sun, Che, 18 December 2015.
"Global illumination (GI) rendering simulates the propagation of light through a 3D volume and its interaction with surfaces, dramatically increasing the fidelity of computer generated images. While off-line GI algorithms such as ray tracing and radiosity can generate physically accurate images, their rendering speeds are too slow for real-time applications. The many-light method is one of many novel emerging real-time global illumination algorithms. However, it requires many shadow maps to be generated for Virtual Point Light (VPL) visibility tests, which reduces its efficiency. Prior solutions restrict either the number or accuracy of shadow map updates, which may lower the accuracy of indirect illumination or prevent the rendering of fully dynamic scenes. In this thesis, we propose a hybrid real-time GI algorithm that utilizes an efficient Sparse Voxel Octree (SVO) ray marching algorithm for visibility tests instead of the shadow map generation step of the many-light algorithm. Our technique achieves high rendering fidelity at about 50 FPS, is highly scalable and can support thousands of VPLs generated on the fly. A survey of current real-time GI techniques as well as details of our implementation using OpenGL and Shader Model 5 are also presented."
|
210 |
Disposable cartridge based platform for real-time detection of single viruses in solution. Scherr, Steven M., 10 July 2017.
Label-free imaging of viruses and nanoparticles directly in complex solutions is important for virology, vaccine research, and rapid diagnostics. These fields would all benefit from tools that allow for more rapid and sensitive characterization of viruses. Traditionally, light microscopy has been used in laboratories for detection of parasites, fungi, and bacteria for both research and clinical diagnosis because it is portable and simple to use. However, virus particles typically cannot be explored using light microscopy without the use of secondary labels due to their small size and low contrast. Characterization and detection of virus particles therefore rely on more complex approaches such as electron microscopy, ELISA, or plaque assay. These approaches require a significant level of expertise, purification of the virus from its natural environment, and often offer indirect verification of the virus presence. A successful virus visualization technique should be rapid, sensitive, and inexpensive, while needing minimal sample preparation or user expertise. We have developed a disposable cartridge-based platform for real-time, sensitive, and label-free visualization of viruses and nanoparticles directly in complex solutions such as serum. To create this platform we combined an interference reflectance imaging technique (SP-IRIS) with a sealable microfluidic cartridge. Through empirical testing and numerical modeling, the cartridge parameters were optimized and a flow rate of ~3 µL/min was established as optimal. A complex 2-dimensional paper-based capillary pump was incorporated into the polymer cartridge to achieve a constant flow rate. Using this platform we were able to reliably show virus detection in a 20-minute experiment. We demonstrate sensitivity comparable to laboratory-based assays such as ELISA and plaque assay, and equal or better sensitivity compared to paper-based rapid diagnostic tests. These results demonstrate a platform technology that is capable of rapid multiplexed detection and visualization of viruses and nanoparticles directly in solution. This disposable cartridge-based platform represents a new approach for sample-to-answer label-free detection and visualization of viruses and nanoparticles. This technology has the potential to enable rapid and high-throughput investigation of virus particle morphology, as well as to be used as a rapid point-of-care diagnostic tool where imaging viruses directly in biological samples would be valuable.
|