201
The effects of magmatic evolution, crystallinity, and microtexture on the visible/near-infrared and thermal-infrared spectra of volcanic rocks
Noel A Scudder (16649295) 01 August 2023 (has links)
The natural chemical and physical variations that occur within volcanic rocks (petrology) provide critical insights into mantle and crust conditions on terrestrial bodies. Visible/near-infrared (VNIR; 0.3-2.5 µm) and thermal infrared (TIR; 5-50 µm) spectroscopy are the main tools available to remotely characterize these materials from satellites in orbit. However, the accuracy of the petrologic information that can be gained from spectra when rocks exhibit complex variations in mineralogy, crystallinity, microtexture, and oxidation state, as occur together in natural settings, is not well constrained. Here, we compare the spectra of a suite of volcanic planetary analog rocks from the Three Sisters, OR to their mineralogy, chemistry, and microtexture from X-ray diffraction, X-ray fluorescence, and electron microprobe analysis. Our results indicate that TIR spectroscopy is an effective petrologic tool in such rocks for modeling bulk mineralogy, crystallinity, and mineral chemistry. Given a library with appropriate glass endmembers, TIR modeling can derive glass abundance with accuracy similar to that of other major mineral groups and provide first-order estimates of glass wt.% SiO2 in glass-rich samples, but cannot effectively detect variations in microtexture and minor oxide minerals. In contrast, VNIR spectra often yield non-unique mineralogic interpretations due to overlapping absorption bands from olivine, glass, and Fe-bearing plagioclase. In addition, we find that sub-micron oxides hosted in transparent matrix material, which are common in fine-grained extrusive rocks, can lower albedo and partially to fully suppress mafic absorption bands, leading to very different VNIR spectra in rocks with the same mineralogy and chemistry. Mineralogical interpretations from VNIR spectra should not be treated as rigorous petrologic indicators, but can supplement TIR-based petrology by providing unique constraints on oxide minerals, microtexture, and alteration processes.
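The TIR modeling described above rests on the fact that thermal-infrared emissivity spectra of rocks mix approximately linearly with the areal abundance of their constituent phases. As a rough illustration of that unmixing step (a sketch only, not the author's code: the endmember library, phase names, and spectra below are synthetic placeholders), non-negative least squares recovers modal abundances from a mixed spectrum:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember library: rows = spectral channels, columns = phases
# (plagioclase, pyroxene, olivine, glass). Real work uses measured emissivities.
rng = np.random.default_rng(0)
library = rng.random((200, 4))
measured = library @ np.array([0.5, 0.2, 0.1, 0.2])  # synthetic mixed spectrum

# Non-negative least squares enforces physically meaningful (>= 0) abundances.
abundances, _ = nnls(library, measured)
abundances /= abundances.sum()                       # normalize to fractions
print(dict(zip(["plag", "pyx", "ol", "glass"], np.round(abundances, 3))))
```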
202
Deciphering the Transcriptomic Heterogeneity of Duodenal Coeliac Disease Biopsies
Wolf, Johannes, Willscher, Edith, Loeffler-Wirth, Henry, Schmidt, Maria, Flemming, Gunter, Zurek, Marlen, Uhlig, Holm H., Händel, Norman, Binder, Hans 26 January 2024 (has links)
Coeliac disease (CD) is a clinically heterogeneous autoimmune disease with variable presentation and progression triggered by gluten intake. Molecular or genetic factors contribute to disease heterogeneity, but the reasons for different outcomes are poorly understood. Transcriptome studies of tissue biopsies from CD patients are scarce. Here, we present a high-resolution analysis of the transcriptomes extracted from duodenal biopsies of 24 children and adolescents with active CD and 21 individuals without CD but with intestinal afflictions as controls. The transcriptomes of CD patients divide into three groups: a mixed group containing the control cases, and CD-low and CD-high groups referring to lower and higher levels of CD severity. Persistence of symptoms was weakly associated with subgroup, but the highest Marsh stages were present in subgroup CD-high, together with the highest cell cycle rates as an indicator of virtually complete villous atrophy. Considerable variation in inflammation level between subgroups was further deciphered into immune cell types using cell-type deconvolution. Self-organizing map portrayal was applied to provide high-resolution landscapes of the CD transcriptome. We find asymmetric patterns of miRNA and long non-coding RNA and discuss the effect of epigenetic regulation. Expression of genes involved in interferon gamma signaling provides suitable markers to distinguish CD from non-CD cases. Multiple pathways overlap in CD biopsies in different ways, giving rise to heterogeneous transcriptional patterns, which potentially provide information about etiology and the course of the disease.
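The cell-type deconvolution mentioned above typically models a bulk transcriptome as a non-negative mixture of cell-type expression signatures and solves for the mixing fractions. A minimal sketch of that idea (the signature matrix and fractions below are random placeholders, not the study's data):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical signature matrix: rows = marker genes, columns = immune cell types.
rng = np.random.default_rng(0)
signatures = rng.random((500, 5))
true_fracs = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
bulk = signatures @ true_fracs + 0.01 * rng.standard_normal(500)

# Bounded least squares keeps each estimated fraction in [0, 1].
res = lsq_linear(signatures, bulk, bounds=(0, 1))
fractions = res.x / res.x.sum()                      # renormalize to sum to one
print(np.round(fractions, 3))
```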
203
True Color Measurements Using Color Calibration Techniques
Wransky, Michael E. 15 September 2015 (has links)
No description available.
204
Estimation of Neural Cell types in the Allen Human Brain Atlas using Murine-derived Expression Profiles
Johnson, Travis Steele 28 September 2016 (has links)
No description available.
205
Particle subgrid scale modeling in large-eddy simulation of particle-laden turbulence
Cernick, Matthew J. 04 1900 (has links)
This thesis is concerned with particle subgrid scale (SGS) modeling in large-eddy simulation (LES) of particle-laden turbulence. Although most particle-laden LES studies have neglected the effect of the subgrid scales on the particles, several particle SGS models have been proposed in the literature. In this research, the approximate deconvolution method (ADM), and the stochastic models of Fukagata et al. (2004), Shotorban and Mashayek (2006) and Berrouk et al. (2007) are analyzed. The particle SGS models are assessed by conducting both a priori and a posteriori tests of a periodic box of decaying, homogeneous and isotropic turbulence with an initial Reynolds number of Re = 74. The model results are compared with particle statistics from a direct numerical simulation (DNS). Particles with a large range of Stokes numbers are tested using various filter sizes and stochastic model constant values. Simulations with and without gravity are performed to evaluate the ability of the models to account for the crossing trajectory and continuity effects. The results show that ADM improves results but is only capable of recovering a portion of the SGS turbulent kinetic energy. Conversely, the stochastic models are able to recover sufficient energy, but show a large range of results dependent on Stokes number and filter size. The stochastic models generally perform best at small Stokes numbers. Due to the random component, the stochastic models are unable to predict preferential concentration. / Master of Applied Science (MASc)
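For context, the approximate deconvolution method assessed in this thesis estimates the unfiltered field by applying a truncated van Cittert series, u* = sum_{k=0..N} (I - G)^k u_bar, where G is the LES filter. A minimal 1D sketch with a periodic Gaussian filter (illustrative only; the filter width and iteration count are arbitrary choices, not the thesis's settings):

```python
import numpy as np

def gaussian_filter_periodic(u, width, dx):
    # Gaussian LES filter applied in Fourier space on a periodic 1D domain.
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    return np.real(np.fft.ifft(np.exp(-(width * k) ** 2 / 24) * np.fft.fft(u)))

def adm_deconvolve(u_bar, filt, n_iter=5):
    # Truncated van Cittert series: u* = sum_{k=0}^{N} (I - G)^k applied to u_bar.
    u_star = u_bar.copy()
    correction = u_bar.copy()
    for _ in range(n_iter):
        correction = correction - filt(correction)   # apply (I - G) once more
        u_star = u_star + correction
    return u_star

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
u = np.sin(x) + 0.5 * np.sin(8 * x)                  # coarse + fine-scale content
filt = lambda v: gaussian_filter_periodic(v, width=8 * dx, dx=dx)
u_star = adm_deconvolve(filt(u), filt)               # approximately recovers u
print(np.linalg.norm(u_star - u) < np.linalg.norm(filt(u) - u))
```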
206
Reduced-Order Modeling of Complex Engineering and Geophysical Flows: Analysis and Computations
Wang, Zhu 14 May 2012 (has links)
Reduced-order models are frequently used in the simulation of complex flows to overcome the high computational cost of direct numerical simulations, especially for three-dimensional nonlinear problems.
Proper orthogonal decomposition, as one of the most commonly used tools to generate reduced-order models, has been utilized in many engineering and scientific applications.
Its original promise of computationally efficient, yet accurate approximation of coherent structures in high Reynolds number turbulent flows, however, still remains to be fulfilled. To balance the low computational cost required by reduced-order modeling and the complexity of the targeted flows, appropriate closure modeling strategies need to be employed.
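As background for what follows: a POD basis is typically extracted from a singular value decomposition of mean-subtracted solution snapshots, keeping the handful of modes that capture most of the snapshot energy. A minimal sketch (the snapshot matrix here is a random placeholder for actual simulation data):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the flow field at one time instant.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((10_000, 200))

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Keep the r leading modes that capture 99% of the fluctuation energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
pod_basis = U[:, :r]
coeffs = pod_basis.T @ (snapshots - mean_flow)       # modal coefficients a_j(t)
print(f"{r} modes retain 99% of the snapshot energy")
```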
In this dissertation, we put forth two new closure models for the proper orthogonal decomposition reduced-order modeling of structurally dominated turbulent flows: the dynamic subgrid-scale model and the variational multiscale model.
These models, which are considered state-of-the-art in large eddy simulation, are carefully derived and numerically investigated.
Since modern closure models for turbulent flows generally have non-polynomial nonlinearities, their efficient numerical discretization within a proper orthogonal decomposition framework is challenging. This dissertation proposes a two-level method for an efficient and accurate numerical discretization of general nonlinear proper orthogonal decomposition closure models. This method computes the nonlinear terms of the reduced-order model on a coarse mesh. Compared with a brute force computational approach in which the nonlinear terms are evaluated on the fine mesh at each time step, the two-level method attains the same level of accuracy while dramatically reducing the computational cost. We numerically illustrate these improvements in the two-level method by using it in three settings: the one-dimensional Burgers equation with a small diffusion parameter, a two-dimensional flow past a cylinder at Reynolds number Re = 200, and a three-dimensional flow past a cylinder at Reynolds number Re = 1000.
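A toy version of the coarse-mesh idea behind the two-level method, using analytic sine modes as a stand-in for a computed POD basis and the Burgers nonlinearity -u u_x (a sketch under those assumptions, not the dissertation's implementation):

```python
import numpy as np

def project_nonlinearity(a, basis, basis_x, dx):
    # Galerkin projection of the Burgers nonlinearity -u u_x onto the basis,
    # with the discrete L2 inner product weighted by the mesh spacing dx.
    u = basis @ a
    return -dx * basis.T @ (u * (basis_x @ a))

n_fine, r, stride = 4096, 10, 8
x = np.linspace(0, 1, n_fine, endpoint=False)
basis = np.column_stack([np.sqrt(2) * np.sin((j + 1) * np.pi * x) for j in range(r)])
basis_x = np.gradient(basis, x, axis=0)
a = np.random.default_rng(0).standard_normal(r)

# Brute-force (fine-mesh) versus two-level (coarse-mesh) evaluation:
fine = project_nonlinearity(a, basis, basis_x, 1 / n_fine)
coarse = project_nonlinearity(a, basis[::stride], basis_x[::stride], stride / n_fine)
print(np.max(np.abs(fine - coarse)))                 # small for smooth modes
```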
With the help of the two-level algorithm, the new nonlinear proper orthogonal decomposition closure models (i.e., the dynamic subgrid-scale model and the variational multiscale model), together with the mixing length and the Smagorinsky closure models, are tested in the numerical simulation of a three-dimensional turbulent flow past a cylinder at Re = 1000. Five criteria are used to judge the performance of the proper orthogonal decomposition reduced-order models: the kinetic energy spectrum, the mean velocity, the Reynolds stresses, the root mean square values of the velocity fluctuations, and the time evolution of the proper orthogonal decomposition basis coefficients. All the numerical results are benchmarked against a direct numerical simulation. Based on these numerical results, we conclude that the dynamic subgrid-scale and the variational multiscale models are the most accurate.
We present a rigorous numerical analysis for the discretization of the new models. As a first step, we derive an error estimate for the time discretization of the Smagorinsky proper orthogonal decomposition reduced-order model for the Burgers equation with a small diffusion parameter.
The theoretical analysis is numerically verified by two tests on problems displaying shock-like phenomena.
We then present a thorough numerical analysis for the finite element discretization of the variational multiscale proper orthogonal decomposition reduced-order model for convection-dominated convection-diffusion-reaction equations. Numerical tests show the increased numerical accuracy over the standard reduced-order model and illustrate the theoretical convergence rates.
We also discuss the use of the new reduced-order models in realistic applications, such as airflow simulation in energy-efficient building design and control problems, as well as the numerical simulation of large-scale ocean motions in climate modeling. Several research directions that we plan to pursue in the future are outlined. / Ph. D.
207
Computational Advancements for Solving Large-scale Inverse Problems
Cho, Taewon 10 June 2021 (has links)
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global and medical imaging problems such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimates by applying prior information on the unknowns via Bayesian inference. However, by combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimates in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimates can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification. / Doctor of Philosophy / For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in global problems such as greenhouse gas tracking, where the problem of estimating the amount of greenhouse gas added to or removed from the atmosphere is difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites), so the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat unknowns as random variables with prior probability distributions in order to compensate for a deficiency in measurements.
We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply our developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of the CO2 production of living organisms.
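As a rough illustration of the projection-plus-regularization idea that hybrid methods build on, the sketch below solves a made-up ill-posed 1D deblurring problem with a Krylov projection method; LSQR's `damp` argument adds standard-form Tikhonov regularization to the projected problem. (Real hybrid projection methods go further and select the regularization parameter adaptively during the iterations.)

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical ill-posed problem: A is a Gaussian blur, b is noisy blurred data.
rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[60:90], x_true[120:140] = 1.0, 0.5
b = A @ x_true + 0.01 * rng.standard_normal(n)

# min ||Ax - b||^2 + damp^2 ||x||^2, solved on a Krylov subspace.
x_reg = lsqr(A, b, damp=0.1, iter_lim=100)[0]
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]       # unregularized: noise-dominated
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_naive - x_true))
```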
208
Dissecting Trypanosome Metabolism by Discovering Glycolytic Inhibitors, Drug Targets, and Glycosomal pH Regulation
Call, Daniel Hale 07 May 2024 (has links) (PDF)
Trypanosoma brucei, the causative agent of African trypanosomiasis, and its relatives Trypanosoma cruzi and several Leishmania species belong to a class of protozoa called kinetoplastids that cause a significant health burden in tropical and semitropical countries across the world. While an improved therapy was recently approved for African trypanosomiasis, the therapies available to treat infections caused by T. cruzi and Leishmania spp. remain relatively poor. Improving our understanding of T. brucei metabolism can shed light on the metabolism of its relatives. The purpose of the research presented in this dissertation was to develop novel tools and methods to study metabolism in T. brucei, with the ultimate aim of improving treatments for all kinetoplastid diseases. We developed a novel tool to study glycosomal pH in the bloodstream form of T. brucei. Using this tool, we discovered that this life stage regulates glycosomal pH differently than the procyclic form, or insect-dwelling stage, and uses only sodium/proton transporters to regulate glycosomal pH. I pioneered a thermal proteome profiling method in this parasite to discover drug targets and their effects on cell pathways. Using this method, I found that other proteins may be involved in glycosomal pH regulation, including PEX11 and a vacuolar ATPase. This method also illuminated several important pathways influenced by glycosomal pH regulation, including glycosome proliferation, vesicle trafficking, protein glycosylation, and amino acid transport. Metabolic studies in kinetoplastid parasites are currently hampered by the lack of available chemical probes. We developed a novel flow cytometry-based high-throughput drug screening assay to discover chemical probes of T. brucei glycolysis. This method combines the advantages of phenotypic (cell-based) screens with those of targeted (purified-protein) screens by multiplexing biosensors that measure multiple glycolytic metabolites simultaneously, such as glucose, ATP, and glycosomal pH. The complementary information gained is then used to distinguish which part of glycolysis the identified inhibitors target. We validated the method using the well-characterized glycolytic and alternative oxidase inhibitors 2-deoxyglucose and salicylhydroxamic acid, respectively. We demonstrated the screening assay with a pilot screen of 14,976 compounds, with decent hit rates for each sensor (0.2-0.4%). About 64% of rescreened hits repeated activity in at least one sensor. We demonstrated one compound with micromolar activity against two biosensors. In summary, we developed and demonstrated a novel screening method that can discover glycolytic chemical probes to better study metabolism in this and related parasites. There are few methods to study enzyme kinetics in the live-cell environment. I developed a kinetic flow cytometry assay that can measure enzyme and transporter activity using fluorescent biosensors. I demonstrated this by measuring glucose transport kinetics and alternative oxidase inhibition kinetics, with the measured kinetic parameters similar to those previously reported. We plan to expand on this method to measure transport kinetics in the glycosome and other organelles, which has not been done before. We previously performed a drug screen to identify inhibitors that decrease intracellular glucose in T. brucei. I have performed preliminary work identifying the glucose transporter THT1 as one of the targets of the optimized glucose inhibitors, using the previously mentioned thermal proteome profiling method. We expect this finding will improve our ability to move these compounds from hit to lead in the drug discovery pipeline. Together, I have developed several flow cytometry and proteomics methods to better study metabolism in T. brucei. These tools are beginning to be used in related parasites. We expect the discoveries made using these tools will improve our ability to treat these neglected tropical diseases.
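For context on the thermal proteome profiling used above: the approach flags candidate drug targets as proteins whose apparent melting temperature shifts upon compound treatment, where the melting temperature comes from fitting a sigmoidal curve to soluble-fraction measurements across a temperature gradient. A minimal sketch with synthetic data (not the dissertation's pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def melt_curve(T, Tm, slope, plateau):
    # Sigmoidal model for the fraction of a protein remaining soluble at T.
    return (1 - plateau) / (1 + np.exp((T - Tm) / slope)) + plateau

temps = np.array([37, 41, 44, 47, 50, 53, 56, 59, 63, 67], dtype=float)
soluble = np.array([1.00, 0.98, 0.93, 0.80, 0.55, 0.30, 0.15, 0.08, 0.05, 0.04])

popt, _ = curve_fit(melt_curve, temps, soluble, p0=[50, 2, 0.05])
print(f"apparent melting temperature Tm = {popt[0]:.1f} C")
```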
209
Large Eddy Simulation Reduced Order Models
Xie, Xuping 12 May 2017 (has links)
This dissertation uses spatial filtering to develop a large eddy simulation reduced order model (LES-ROM) framework for fluid flows. Proper orthogonal decomposition is utilized to extract the dominant spatial structures of the system. Within the general LES-ROM framework, two approaches are proposed to address the celebrated ROM closure problem. No phenomenological arguments (e.g., of eddy viscosity type) are used to develop these new ROM closure models.
The first novel model is the approximate deconvolution ROM (AD-ROM), which uses methods from image processing and inverse problems to solve the ROM closure problem. The AD-ROM is investigated in the numerical simulation of a 3D flow past a circular cylinder at a Reynolds number Re = 1000. The AD-ROM generates accurate results without any numerical dissipation mechanism. It also decreases the CPU time of the standard ROM by orders of magnitude.
The second new model is the calibrated-filtered ROM (CF-ROM), which is a data-driven ROM. The available full order model results are used offline in an optimization problem to calibrate the ROM subfilter-scale stress tensor. The resulting CF-ROM is tested numerically in the simulation of the 1D Burgers equation with a small diffusion parameter. The numerical results show that the CF-ROM is more efficient than and as accurate as state-of-the-art ROM closure models. / Ph. D. / Numerical simulation of complex fluid flows is often challenging in many realistic engineering, scientific, and medical applications. Indeed, an accurate numerical approximation of such flows generally requires millions and even billions of degrees of freedom. Furthermore, some design and control applications involve repeated numerical simulations for different parameter values. Reduced order models (ROMs) are an efficient approach to the numerical simulation of fluid flows, since they can reduce the computational time of a brute force computational approach by orders of magnitude while preserving key features of the flow.
Our main contribution to the field is the use of spatial filtering to develop better ROMs. To construct the new spatially filtered ROMs, we use ideas from image processing and inverse problems, as well as data-driven algorithms. The new ROMs are more accurate than standard ROMs in the numerical simulation of challenging three-dimensional flows past a circular cylinder.
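A rough sketch of the offline least-squares calibration idea behind a data-driven closure such as the CF-ROM (all data below are synthetic placeholders; the actual CF-ROM calibrates a subfilter-scale stress ansatz against projected full-order results):

```python
import numpy as np

# Given the mismatch between the true projected right-hand side (from full-order
# snapshots) and the Galerkin ROM's right-hand side, fit a linear closure ansatz
# tau(a) = C a by least squares over the available snapshots.
rng = np.random.default_rng(0)
r, n_snap = 8, 400
a = rng.standard_normal((r, n_snap))                 # modal coefficients over time
C_true = 0.1 * rng.standard_normal((r, r))           # "unknown" closure operator
mismatch = C_true @ a + 0.01 * rng.standard_normal((r, n_snap))

# Solve min_C || C a - mismatch ||_F as a linear least-squares problem.
C_fit = np.linalg.lstsq(a.T, mismatch.T, rcond=None)[0].T
print(np.linalg.norm(C_fit - C_true) / np.linalg.norm(C_true))
```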
210
Holographic imaging of cold atoms
Turner, Lincoln David Unknown Date (has links) (PDF)
This thesis presents a new optical imaging technique that measures the structure of objects without the use of lenses. Termed diffraction-contrast imaging (DCI), the method retrieves the object structure from a Fresnel diffraction pattern of the object using a deconvolution algorithm. DCI is particularly adept at imaging highly transparent objects, and this is demonstrated by retrieving the structure of an almost transparent cloud of laser-cooled atoms. Applied to transparent Bose-Einstein condensates, DCI should allow the non-destructive imaging of the condensate while requiring only the minimum possible apparatus of a light source and a detector. (For the complete abstract, open the document.)
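For context, retrieving a phase object from a single Fresnel diffraction pattern is commonly posed as a Fourier-space deconvolution against a contrast transfer function, with a Wiener-style regularizer taming the zeros of the kernel. A generic sketch under the weak-phase-object assumption (the wavelength, distance, and data below are made up; this is not the thesis's algorithm):

```python
import numpy as np

def wiener_deconvolve(image, H, noise_reg=1e-2):
    # Regularized division in Fourier space: F_obj ~ F_img * conj(H) / (|H|^2 + reg).
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * np.conj(H) / (np.abs(H) ** 2 + noise_reg)))

# Weak-phase Fresnel contrast kernel H(k) = 2 sin(pi * lambda * z * |k|^2).
n, pixel, wavelength, z = 256, 5e-6, 780e-9, 0.01
k = np.fft.fftfreq(n, d=pixel)
k2 = k[:, None] ** 2 + k[None, :] ** 2
H = 2 * np.sin(np.pi * wavelength * z * k2)

contrast = np.random.default_rng(0).standard_normal((n, n))  # placeholder pattern
phase_estimate = wiener_deconvolve(contrast, H)
print(phase_estimate.shape)
```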