441 |
Adaptive optics stimulated emission depletion microscope for thick sample imaging. Zdankowski, Piotr. January 2018.
Over the past few decades, fluorescence microscopy has become the most widely used imaging technique in the life sciences. Unfortunately, all classical optical microscopy techniques share one limitation: their resolution is bounded by diffraction. Owing to strong interest in the field, fluorescence microscopy continues to develop rapidly, with novel solutions appearing regularly. The major breakthrough came with the appearance of super-resolution microscopy techniques, which enable imaging well below the diffraction barrier and opened the era of nanoscopy. Among the fluorescent super-resolution techniques, Stimulated Emission Depletion (STED) microscopy is particularly interesting, as it is a purely optical technique that requires no post-processing of images. STED microscopy has been shown to resolve structures down to molecular scales. However, super-resolution microscopy is not a cure-all and has its own limits. Super-resolution imaging of thick samples has proven particularly challenging: as the thickness of biological structures increases, aberrations grow and the signal-to-noise ratio (SNR) decreases. This becomes even more evident in super-resolution imaging, because nanoscopic techniques are especially sensitive to aberrations and low SNR. The aim of this work is to propose and develop a 3D STED microscope that can image thick biological samples with nanoscopic resolution. To achieve this, adaptive optics (AO) was employed to correct the aberrations, using the indirect wavefront sensing approach. This thesis presents a custom-built 3D STED microscope with AO correction and the resulting images of thick samples with resolution beyond the diffraction barrier. The developed STED microscope achieved a resolution of 60 nm laterally and 160 nm axially. Moreover, it enabled super-resolution imaging of thick, aberrating samples. HeLa cells, RPE-1 cells and dopaminergic neurons differentiated from human iPS cells were imaged with the microscope. The results shown in this thesis present 3D STED imaging of thick biological samples and, particularly noteworthy, 3D STED imaging at a depth of 80 μm, where the excitation and depletion beams must propagate through a thick layer of tissue. 3D STED images at such depths had not been reported previously.
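As a purely illustrative aside, the indirect (image-based, sensorless) wavefront sensing idea mentioned above can be sketched as follows: probe one aberration mode at a time, score each trial correction with an image-quality metric, and keep the amplitude that maximizes it. The metric, the modes, and the probe amplitudes below are assumptions for demonstration, not the routine used in the thesis.

    # Minimal sketch of indirect (sensorless) adaptive-optics correction:
    # scan one aberration mode at a time, score each trial correction with an
    # image-quality metric, and keep the amplitude that maximizes it. The
    # metric, modes and probe amplitudes are illustrative assumptions only.
    import numpy as np

    def image_metric(img):
        # Sharpness metric: sum of squared pixel intensities (favors a tight focus).
        return float(np.sum(np.asarray(img, dtype=float) ** 2))

    def correct_modes(acquire_image, n_modes=3, amplitudes=np.linspace(-1.0, 1.0, 7)):
        """acquire_image(coeffs) is assumed to apply the coefficient vector to the
        corrective element (e.g. a deformable mirror) and return a 2D image."""
        coeffs = np.zeros(n_modes)
        for m in range(n_modes):                 # e.g. astigmatism, coma, spherical
            scores = []
            for a in amplitudes:                 # probe a range of bias amplitudes
                trial = coeffs.copy()
                trial[m] = a
                scores.append(image_metric(acquire_image(trial)))
            coeffs[m] = amplitudes[int(np.argmax(scores))]   # keep the best probe
        return coeffs

In practice the metric and the number of probe steps are chosen to balance photobleaching against correction quality; a parabolic fit around the best probe can refine the optimum.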
|
442 |
Behandlungsverlauf von Kindern mit intraspinalen Tumoren, Wirbelsäulendeformitäten und vertical expandable prosthetic titanium rib (VEPTR) Implantaten / Surgical Treatment of Spinal Deformities in Young Paraplegic Children with Intraspinal Tumors and vertical expandable prosthetic titanium rib (VEPTR). Schiele, Steffen. 21 March 2019.
No description available.
|
443 |
Statistical analysis methods for time varying nanoscale imaging problems. Laitenberger, Oskar. 29 June 2018.
No description available.
|
444 |
Teoria de correção de erros quânticos durante operações lógicas e medidas de diagnóstico de duração finita / Quantum error-correction theory during logical gates and finite-time syndrome measurements. Castro, Leonardo Andreta de. 17 February 2012.
Neste trabalho, estudamos a teoria quântica de correção de erros, um dos principais métodos de prevenção de perda de informação num computador quântico. Este método, porém, normalmente é estudado considerando-se condições ideais em que a atuação das portas lógicas que constituem o algoritmo quântico não interfere com o tipo de erro que o sistema sofre. Além disso, as medidas de síndrome empregadas no método tradicional são consideradas instantâneas. Nossos objetivos neste trabalho serão avaliar como a alteração dessas duas suposições modificaria o processo de correção de erros. Com relação ao primeiro objetivo, verificamos que, para erros causados por ambientes externos, a atuação de uma porta lógica simultânea ao ruído pode gerar erros que, a princípio, podem não ser corrigíveis pelo código empregado. Propomos em seguida um método de correção a pequenos passos que pode ser usado para tornar desprezíveis os erros incorrigíveis, além de poder ser usado para reduzir a probabilidade de erros corrigíveis. Para o segundo objetivo, estudamos primeiro como medidas de tempo finito afetam a descoerência de apenas um qubit, concluindo que esse tipo de medida pode na verdade proteger o estado que está sendo medido. Motivados por isso, mostramos que, em certos casos, medidas de síndrome finitas realizadas conjuntamente ao ruído são capazes de proteger o estado dos qubits contra os erros mais eficientemente do que se as medidas fossem realizadas instantaneamente ao fim do processo. / In this work, we study the theory of quantum error correction, one of the main methods of preventing loss of information in a quantum computer. This method, however, is normally studied under ideal conditions in which the operation of the quantum gates that constitute the quantum algorithm does not interfere with the kind of error the system undergoes. Moreover, the syndrome measurements employed in the traditional method are considered instantaneous. Our aims in this work are to evaluate how altering these two assumptions would modify the quantum error correction process. With respect to the first objective, we verify that, for errors caused by external environments, the action of a logical gate concurrently with the noise can produce errors that, in principle, may not be correctable by the code employed. We then propose a short-step correction method that can be used to render the uncorrectable errors negligible and also to reduce the probability of correctable errors. For the second objective, we first study how finite-time measurements affect the decoherence of a single qubit, concluding that this kind of measurement can actually protect the state being measured. Motivated by this, we demonstrate that, in certain cases, finite-time syndrome measurements performed concurrently with the noise protect the state of the qubits against errors more efficiently than if the measurements had been performed instantaneously at the end of the process.
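For readers unfamiliar with the standard (instantaneous-measurement) procedure that this thesis generalizes, the sketch below simulates the textbook three-qubit bit-flip code with dense state vectors. It is only an illustrative baseline; it does not model the finite-time measurements or the gates acting concurrently with noise studied in the thesis.

    # Textbook three-qubit bit-flip code with idealized, instantaneous syndrome
    # measurement. Baseline illustration only; NOT the finite-time model of the thesis.
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1.0, -1.0]).astype(complex)

    def kron(*ops):
        out = np.array([[1.0 + 0j]])
        for op in ops:
            out = np.kron(out, op)
        return out

    a, b = 0.6, 0.8                      # encode a|0> + b|1> as a|000> + b|111>
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = a, b

    flipped = np.random.randint(3)       # single bit-flip error on a random qubit
    ops = [I2, I2, I2]
    ops[flipped] = X
    psi = kron(*ops) @ psi

    # The corrupted codeword is a stabilizer eigenstate, so the expectation
    # values of Z1Z2 and Z2Z3 give the syndrome deterministically (+1 or -1).
    s1 = np.real(psi.conj() @ kron(Z, Z, I2) @ psi)
    s2 = np.real(psi.conj() @ kron(I2, Z, Z) @ psi)
    if s1 < 0 and s2 > 0:
        err = 0
    elif s1 < 0 and s2 < 0:
        err = 1
    elif s1 > 0 and s2 < 0:
        err = 2
    else:
        err = None                       # no error detected

    if err is not None:                  # apply the corrective bit flip
        ops = [I2, I2, I2]
        ops[err] = X
        psi = kron(*ops) @ psi

    print("overlap with the original codeword:", abs(a * psi[0b000] + b * psi[0b111]))
    # ~1.0: a single bit flip is always corrected by this code.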
|
445 |
Repasse cambial no Brasil: uma investigação a nível agregado a partir de um SVEC / Exchange-Rate pass-through in Brazil: a SVEC investigation. Godoi, Lucas Gonçalves. 14 June 2018.
O impacto de movimentos cambiais nos níveis de preços é de suma importância para a formulação de políticas econômicas. Nesse contexto, este trabalho tem como objetivo a utilização de uma nova metodologia para a estimação e cálculo do repasse para diferentes índices de preço no período de 2003-2017. Estudos anteriores nesse campo ignoram as relações de longo-prazo presentes no sistema ou não utilizam as restrições dadas pela estrutura de cointegração do sistema. Assim a identificação dos choques estruturais é discutida a partir da premissa de separação entre choques permanentes e estruturais sendo que a mesma é fundamentada pela teoria com o auxílio de testes estatísticos. Além dessa estrutura não-recursiva, uma alternativa é apresentada a partir de estruturas recursivas de Cholesky de forma a tornar possível a comparação. Três distintas especificações são estimadas de maneira a gerar estimativas para o repasse aos preços de importação, no atacado e ao consumidor para o Brasil. Para a estrutura não recursiva os repasses para os preços de importação variam de 48 a 65% a depender da especificação sendo diferentes de completo no longo-prazo. Para os preços no atacado os repasses variam de 11 a 15% se mostrando em duas das três especificações estatisticamente diferentes de zero. Os repasses ao consumidor variam de 4 a 13% se mostrando estatisticamente diferente de zero em duas das três especificações. / The impact of exchange-rate movements on price levels is of utmost importance for the formulation of economic policy. In this context, this work applies a new methodology to estimate exchange-rate pass-through to different price indices over the period 2003-2017. Previous studies in this field either ignore the long-run relationships present in the system or do not use the restrictions given by the system's cointegration structure. Thus, the identification of the structural shocks is discussed from the premise of a separation between permanent and structural shocks, which is grounded in theory with the aid of statistical tests. In addition to this non-recursive structure, an alternative based on recursive Cholesky structures is presented to make the comparison possible. Three different specifications are estimated to produce pass-through estimates for import, wholesale, and consumer prices in Brazil. For the non-recursive structure, pass-through to import prices ranges from 48% to 65% depending on the specification, and is different from complete pass-through in the long run. For wholesale prices, pass-through ranges from 11% to 15% and is statistically different from zero in two of the three specifications. Pass-through to consumer prices ranges from 4% to 13% and is statistically different from zero in two of the three specifications.
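For context, pass-through at a given horizon is typically computed as the cumulative response of a price index to an exchange-rate shock divided by the cumulative response of the exchange rate itself. The sketch below illustrates only that final step with made-up impulse responses; it is not the SVEC estimation carried out in the dissertation.

    # Illustrative computation of exchange-rate pass-through from impulse
    # responses. The response arrays are hypothetical placeholders; in the
    # thesis they come from a structural VEC model (SVEC), not from this toy.
    import numpy as np

    # Monthly impulse responses to a 1% exchange-rate shock (horizons 0..11)
    irf_exchange_rate = np.array([1.00, 0.90, 0.80, 0.75, 0.72, 0.70,
                                  0.69, 0.68, 0.68, 0.67, 0.67, 0.67])
    irf_consumer_prices = np.array([0.01, 0.02, 0.03, 0.03, 0.04, 0.04,
                                    0.05, 0.05, 0.05, 0.06, 0.06, 0.06])

    def pass_through(irf_price, irf_fx):
        """Cumulative price response divided by cumulative exchange-rate
        response at each horizon; 1.0 would mean complete pass-through."""
        return np.cumsum(irf_price) / np.cumsum(irf_fx)

    pt = pass_through(irf_consumer_prices, irf_exchange_rate)
    print("pass-through after 12 months: %.2f" % pt[-1])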
|
446 |
Structured low rank approaches for exponential recovery - application to MRI. Balachandrasekaran, Arvind. 01 December 2018.
Recovering a linear combination of exponential signals characterized by their parameters is highly significant in many MR imaging applications, such as parameter mapping and spectroscopy. The parameters carry useful clinical information and can act as biomarkers for various cardiovascular and neurological disorders. However, their accurate estimation requires a large number of high-spatial-resolution images, resulting in long scan times. One way to reduce scan time is to acquire undersampled measurements. The recovery of images is usually posed as an optimization problem, regularized by functions enforcing sparsity, smoothness, or low-rank structure. Recently, structured matrix priors have gained prominence in many MRI applications because of their superior performance over the aforementioned conventional priors. However, none of them is designed to exploit the smooth exponential structure of the 3D dataset.
In this thesis, we exploit the exponential structure of the signal at every pixel location and the spatial smoothness of the parameters to derive a 3D annihilation relation in the Fourier domain. This relation translates into a product of a Hankel/Toeplitz structured matrix, formed from the k-t samples, and a vector of filter coefficients. We show that this matrix has a low rank structure, which is exploited to recover the images from undersampled measurements. We demonstrate the proposed method on the problem of MR parameter mapping. We compare the algorithm with the state-of-the-art methods and observe that the proposed reconstructions and parameter maps have fewer artifacts and errors.
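The low-rank property that this framework relies on can be illustrated with a toy one-dimensional example: uniform samples of a sum of K exponentials yield a Hankel matrix of rank K, so a short annihilating filter exists. The thesis works with 3D k-t data and a multi-fold Toeplitz matrix, so the sketch below is a conceptual illustration only, with arbitrary parameters.

    # Toy illustration: a Hankel matrix built from samples of a sum of K damped
    # exponentials has rank K, which is what structured low rank methods exploit.
    # This 1D example is a conceptual sketch, not the 3D k-t formulation of the thesis.
    import numpy as np
    from scipy.linalg import hankel

    n = 64
    t = np.arange(n)
    # Two damped exponentials (K = 2); the parameters are arbitrary
    signal = 1.0 * np.exp((-0.05 + 0.40j) * t) + 0.7 * np.exp((-0.02 + 0.90j) * t)

    # Hankel matrix with 16 columns (the filter length); its rank should be K = 2
    H = hankel(signal[:n - 15], signal[n - 16:])
    singular_values = np.linalg.svd(H, compute_uv=False)
    print("largest singular values:", np.round(singular_values[:4], 4))
    # Only the first two are numerically non-zero, so the matrix is rank deficient
    # and a filter of length 3 annihilates the signal.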
We extend the structured low rank framework to correct field-inhomogeneity artifacts in MR images. We introduce novel approaches for field map compensation for data acquired using Cartesian and non-Cartesian trajectories. We adopt the time-segmentation approach and reformulate the artifact correction problem as the recovery of a time series of images from undersampled measurements. Upon recovery, the first image of the series corresponds to the distortion-free image. With this reformulation, we can assume that the signal at every pixel follows an exponential characterized by the field map and the damping constant R2*. We exploit the smooth exponential structure of the 3D dataset to derive a low rank structured matrix prior, similar to the parameter mapping case. We demonstrate the algorithm on a spherical MR phantom and on human data and show that the artifacts are greatly reduced compared to the uncorrected images.
Finally, we develop a structured matrix recovery framework to accelerate cardiac breath-held MRI. We model the cardiac image data as a 3D piecewise constant function. We assume that the zeros of a 3D trigonometric polynomial coincide with the edges of the image data, resulting in a Fourier domain annihilation relation. This relation can be compactly expressed in terms of a structured low rank matrix. We exploit this low rank property to recover the cardiac images from undersampled measurements. We demonstrate the superiority of the proposed technique over conventional sparsity- and smoothness-based methods. Although the model assumed here is not exponential, the proposed algorithm is closely related to the one developed for parameter mapping.
The direct implementation of the algorithms has a high memory demand and computational complexity due to the formation and storage of a large multi-fold Toeplitz matrix. To date, the practical utility of such algorithms on high-dimensional datasets has been limited for these reasons. We address these issues by introducing novel Fourier domain approximations, which result in a fast and memory-efficient algorithm for the above-mentioned applications. These approximations allow us to work with large datasets efficiently and eliminate the need to store the Toeplitz matrix. We note that the algorithm developed for exponential recovery is general enough to be applied to applications beyond MRI.
|
447 |
The effects of error correction with and without reinforcement on skill acquisition and preferences of children with autism spectrum disorder. Yuan, Chengan. 01 August 2018.
Children with autism spectrum disorder (ASD) often require early intensive behavioral interventions (EIBI) to improve their skills in a variety of domains. Error correction is a common instructional component in EIBI programs because children with ASD tend to make persistent errors. Ineffective error correction can result in a lack of learning or undesirable behavior. To date, research has not systematically investigated the use of reinforcement during error correction for children with ASD.
This study compared the effects of correcting errors with and without reinforcement on skill acquisition and on the preferences of young children with ASD. Four boys with ASD between 3 and 7 years old in China participated in the study. In the context of a repeated-acquisition design, each participant completed three sets of matching-to-sample tasks under the two error-correction procedures. In the error-correction-with-reinforcement condition, the participants received reinforcers after correct responses prompted by the researcher following errors. In the without-reinforcement condition, the participants did not receive any reinforcers after prompted responses. The number of sessions required to reach the mastery criterion under the two conditions varied among the participants. Visual analysis did not confirm a functional relation between the error-correction procedures and the number of sessions required to reach mastery. With regard to the children's preferences, three children preferred the with-reinforcement condition and one preferred the without-reinforcement condition. The findings have conceptual implications and suggest practical implications related to treatment preference.
|
448 |
Performance evaluation of a network of polarimetric X-Band radars used for rainfall estimation. Domaszczynski, Piotr. 01 July 2012.
Networks of small, often mobile, polarimetric radars are gaining popularity in the hydrometeorology community due to their rainfall-observing capabilities and relatively low purchase cost. In recent years, a number of installations have become operational around the globe. The problem of signal attenuation by intervening rainfall has been recognized as the major source of error in rainfall estimation by short-wavelength (C-, X-, and K-band) radars. The simultaneous observation of precipitation by multiple radars creates new prospects for better and more robust attenuation correction algorithms and, consequently, more accurate rainfall estimation.
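As background on what such correction algorithms do, the sketch below shows a simple gate-by-gate (Hitschfeld-Bordan-type) attenuation correction along a single ray. The power-law relation and its coefficients are illustrative assumptions for X band; the network-based methods evaluated in the thesis are considerably more sophisticated.

    # Simple gate-by-gate attenuation correction for one radar ray
    # (Hitschfeld-Bordan style). Specific attenuation is assumed to follow a
    # power law A = a * Z^b; coefficients are illustrative, not the thesis's values.
    import numpy as np

    def correct_ray(z_measured_dbz, gate_km=0.1, a=1.37e-4, b=0.78):
        """Return reflectivity corrected for two-way path attenuation.
        z_measured_dbz: measured reflectivity along the ray, in dBZ."""
        corrected = np.empty_like(z_measured_dbz, dtype=float)
        path_atten_db = 0.0                      # two-way attenuation accumulated so far
        for i, z_db in enumerate(z_measured_dbz):
            corrected[i] = z_db + path_atten_db  # undo attenuation over preceding gates
            z_lin = 10.0 ** (corrected[i] / 10.0)
            specific_atten = a * z_lin ** b      # one-way specific attenuation, dB/km
            path_atten_db += 2.0 * specific_atten * gate_km
        return corrected

    ray = np.linspace(45.0, 20.0, 300)           # synthetic ray through heavy rain
    print(correct_ray(ray)[-1] - ray[-1], "dB of correction at the far gate")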
The University of Iowa hydrometeorology group's acquisition of a network of four mobile, polarimetric, X-band radars has resulted in the need for a thoughtful evaluation of the instrument. In this work, we use computer simulations and the data collected by The University of Iowa Polarimetric Radar Network to study the performance of attenuation correction methods in single-radar and network-based arrangements.
To support the computer simulations, we developed a comprehensive polarimetric radar network simulator, which replicates the essential aspects of the radar network rainfall observing process. The simulations are based on a series of physics- and stochastic-based simulated rainfall events occurring over the area of interest. The characteristics of the simulated radars are those of The University of Iowa Polarimetric Radar Network. We assess the correction methods by analyzing the errors in reflectivity and rainfall rate over the area of interest covered by the network's radars. To enable the application of the attenuation correction methods to the data collected by The University of Iowa Polarimetric Radar Network, we first developed a set of utilities to assist with efficient data collection and analysis. Next, we conducted a series of calibration tests to evaluate the relative calibration and channel balance of the network's radars. Finally, in an attempt to verify the results obtained via computer simulations, we applied the set of attenuation correction algorithms to the data collected by The University of Iowa Polarimetric Radar Network.
|
449 |
Modeling and Projection of the North American Monsoon Using a High-Resolution Regional Climate Model. Meyer, Jonathan D.D. 01 May 2017.
This dissertation aims to better understand how various climate modeling approaches affect the fidelity of simulations of the North American Monsoon (NAM), as well as the sensitivity of the future state of the NAM under a global warming scenario. Here, we improve over current fully coupled general circulation models (GCMs), which struggle to fully resolve the controlling dynamics responsible for the development and maintenance of the NAM. To accomplish this, we dynamically downscaled a GCM with a regional climate model (RCM), the advantage being a higher model resolution that improves the representation of processes at scales GCMs cannot resolve. However, because all RCM applications are subject to the transfer of biases inherent in the parent GCM, this study developed and evaluated a process to reduce these biases. For both precipitation and the various controlling dynamics of the NAM, we found that simulations driven by these bias-corrected forcing conditions performed moderately better across a 32-year historical climatology than simulations driven by the original GCM data.
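The bias-reduction step is not detailed in this abstract; one common approach of this kind is empirical quantile mapping, sketched below purely for illustration. The actual correction of the GCM forcing fields developed in the dissertation may differ.

    # Illustrative empirical quantile mapping: map each GCM value to the observed
    # value at the same quantile of the historical distribution. A generic
    # bias-correction sketch, not necessarily the dissertation's procedure.
    import numpy as np

    def quantile_map(gcm_hist, obs_hist, gcm_values):
        """Correct gcm_values using historical GCM and observed samples."""
        # Quantile of each value within the historical GCM distribution
        quantiles = np.searchsorted(np.sort(gcm_hist), gcm_values) / float(len(gcm_hist))
        quantiles = np.clip(quantiles, 0.0, 1.0)
        # The observed value at the same quantile becomes the corrected value
        return np.quantile(obs_hist, quantiles)

    # Example with synthetic temperatures: the GCM runs about 2 K too warm
    rng = np.random.default_rng(0)
    obs = rng.normal(300.0, 5.0, 10000)        # "observed" historical sample (K)
    gcm = rng.normal(302.0, 5.0, 10000)        # biased GCM historical sample
    future = rng.normal(304.0, 5.0, 5)         # future GCM values to correct
    print(quantile_map(gcm, obs, future))      # roughly 2 K cooler than the input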
Current GCM consensus suggests that future tropospheric warming, associated with increased radiative forcing as greenhouse gas concentrations rise, will suppress the NAM convective environment through greater atmospheric stability. This mechanism yields later onset dates and a generally drier season, but a slight increase in intensity during July-August. After comparing downscaled simulations forced with original and corrected forcing conditions, we argue that unresolved GCM surface features, such as changes in Gulf of California evaporation, lead to a more convective environment. Even when downscaling the original GCM data with known biases, the inclusion of these surface features altered, and in some cases reversed, GCM trends throughout the southwest United States. This reversal towards a wetter NAM is further magnified in future bias-corrected simulations, which suggest (1) fewer dry days on average by the end of the 21st century, (2) onset occurring two to three weeks earlier than the historical average, and (3) more extreme daily precipitation values. However, consistent across each GCM and RCM is an increase in inter-annual variability, suggesting greater susceptibility to drought conditions in the future.
|
450 |
Broad-Band Space Conservative On Wafer Network Analyzer Calibrations With More Complex SOLT Definitions. Padmanabhan, Sathya. 29 March 2004.
An improved Short-Open-Load-Thru (SOLT) on-wafer vector network analyzer calibration method for broad-band accuracy is proposed. Accurate measurement of on-wafer devices over a wide frequency range, from DC to high frequencies, with a minimum number of space-conservative standards has always been desirable. Therefore, this work aims to improve the existing calibration methods and to suggest a "best practice" strategy that can be adopted to obtain greater accuracy with a simplified procedure and calibration set.
Quantitative and qualitative comparisons are made to the existing calibration techniques, and the advantages and drawbacks of each calibration are analyzed. Prior work at the University of South Florida on an improved SOLT calibration is summarized. The presented work is a culmination and refinement of that prior USF work, which suggested that SOLT calibration improves with more complex definitions of the calibration standards.
Modeling of the load and thru standards is shown to improve accuracy, as the frequency variation of these two standards can be significant. The load is modeled with a modified equivalent circuit that includes the high-frequency parasitics. The model is physically verified on different substrates. The relation of load impedance to DC resistance is verified and its significance in SOLT calibrations is illustrated. The thru equation accounts for the reflections, phase shift, and losses of a transmission line, including dielectric and conductor losses. These equations are important for cases where a non-zero thru length is assumed in the calibration.
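To make the idea of a frequency-dependent load definition concrete, the sketch below evaluates a simple lumped model: the DC resistance in series with a parasitic inductance, shunted by a parasitic capacitance. The topology and element values are illustrative assumptions, not the model derived and verified in the thesis.

    # Illustrative lumped model of an on-wafer load standard: DC resistance in
    # series with a parasitic inductance, shunted by a parasitic capacitance.
    # Topology and element values are assumptions for demonstration only.
    import numpy as np

    def load_impedance(freq_hz, r_dc=50.0, l_series=20e-12, c_shunt=15e-15):
        w = 2.0 * np.pi * freq_hz
        z_series = r_dc + 1j * w * l_series          # R + jwL branch
        z_shunt = 1.0 / (1j * w * c_shunt)           # parasitic shunt capacitance
        return z_series * z_shunt / (z_series + z_shunt)

    for f in [1e9, 20e9, 50e9]:
        z = load_impedance(f)
        print("%5.0f GHz: Z = %.1f %+.1fj ohms" % (f / 1e9, z.real, z.imag))
    # At low frequency Z is essentially the DC resistance; at tens of GHz the
    # parasitics make the load noticeably reactive, which is why a frequency-
    # dependent load definition matters in broadband SOLT calibration.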
The complex definitions of the calibration standards are incorporated into the calibration algorithm in LabVIEW and tested on two different VNAs, a Wiltron 360B and an Anritsu Lightning. The importance of including forward and reverse switch-term error correction in the algorithm is analyzed, and measurements that verify the improvement are shown. The concept of using calibration standards with the same footprint to simplify the calibration process is highlighted, with results to verify it.
The proposed technique thus provides a calibration strategy that overcomes the low-frequency problems of TRL and retains TRL accuracy at high frequencies, while enabling the use of a compact, common-footprint calibration set.
|