  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Investigation of surface inhomogeneity and estimation of the GOES skin temperature assimilation errors of the MM5 implied by the inhomogeneity over Houston metropolitan area

Han, Sang-Ok 01 November 2005 (has links)
This study developed a parameterization method to investigate the impacts of inhomogeneous land surfaces on mesoscale model simulations using a high-resolution 1-d PBL model. The 1-d PBL model was then used to investigate the inhomogeneity-caused model errors that arise when the GOES satellite skin temperature assimilation technique is applied to the MM5 over the Houston metropolitan area (HOU). To investigate the impacts of surface inhomogeneity on the surface fluxes and PBL variables over HOU, homogeneous and inhomogeneous 1-d PBL model simulations were performed over HOU and compared. The 1-d PBL model was constructed so that surface inhomogeneities could be represented within model grid elements, using a methodology similar to that of Avissar and Pielke (1989). The surface inhomogeneities over HOU were defined using 30-m resolution land cover data produced by Global Environment Management (GEM), Inc. The inhomogeneity parameterization developed in the 1-d model was then applied to a standard MM5 simulation to test its applicability to 3-d mesoscale model simulations. From the 1-d simulations it was inferred that the surface inhomogeneities would enhance the sensible heat flux by about 36% and reduce the latent heat flux by about 25%, producing a warmer (0.7%) and drier (-1.0%) PBL, along with a colder, moister PBL top caused by greater turbulent diffusivities. The 3-d application of the inhomogeneity parameterization gave results generally consistent with the 1-d simulations, with additional effects of advection and differential local circulation. The original GOES simulation was warmer relative to observations over HOU than over the surrounding areas. The satellite data assimilation itself would lead to a warm bias, because the satellite misestimates the gridpoint-mean skin temperature, but the 1-d simulations indicate that the impact of this error should be much weaker than what was observed.
It appears that, unless the existing warm and dry bias of the MM5 is corrected first, adding the inhomogeneity parameterization to the MM5 would adversely affect its performance. Consideration of surface inhomogeneities in the urban area should therefore, for now, be confined to the GOES skin temperature retrieval errors.
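As a rough illustration of the mosaic-style aggregation attributed above to Avissar and Pielke (1989), a grid-cell mean surface flux can be computed as the area-weighted average of per-land-cover-patch fluxes. The sketch below is hypothetical: the function name, patch types, and all flux values are illustrative, not taken from the thesis.

```python
def grid_mean_flux(patch_fluxes, patch_fractions):
    """Area-weighted grid-cell mean of per-patch surface fluxes (W m^-2)."""
    if abs(sum(patch_fractions) - 1.0) > 1e-6:
        raise ValueError("patch area fractions must sum to 1")
    return sum(f * a for f, a in zip(patch_fluxes, patch_fractions))

# Hypothetical urban, grass, and water patches within one grid cell:
fractions = [0.5, 0.3, 0.2]
sensible = grid_mean_flux([250.0, 120.0, 30.0], fractions)
latent = grid_mean_flux([60.0, 220.0, 300.0], fractions)
```

Comparing a run driven by such aggregated fluxes against a run using a single dominant land-cover type mirrors, in miniature, the homogeneous-versus-inhomogeneous 1-d experiments described above.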
3

Observations and inhomogeneity in cosmology

Smale, Peter Rich January 2012 (has links)
We interpret distance measurements from nearby galaxies, type Ia supernovae, and gamma-ray bursts in the light of a cosmological model that incorporates a spatial averaging technique to account for the inhomogeneous distribution of structure in the late-epoch Universe and the consequent importance of the location of the observer. The timescape cosmology suggests that dark energy is a misidentification of gravitational energy gradients, and consequently of the relative calibration of clocks and rulers, in a complex inhomogeneous structure. This model is consistent with the current supernova and gamma-ray burst data within the limits imposed by our understanding of the systematic uncertainties, to the extent that a Bayesian model comparison with the standard model yields a preference for the timescape model that is “not worth more than a bare mention”. In the spirit of the timescape model, which attempts to understand the astrophysics with as few cosmological assumptions as possible, we perform a model-independent analysis of galaxy distances in the local Universe. We find that the rest frame of the Local Group provides a more uniform Hubble expansion field than the rest frame of the CMB. We also find that the dipole in the Hubble expansion field coincides with the dipole in the CMB temperature, with a correlation coefficient of -0.92, and that this pattern is induced within 60 h⁻¹ Mpc, provided the variation in the distance-redshift relation due to the formation of structure is taken into account.
4

Inhomogeneous Totally Asymmetric Simple Exclusion Processes: Simulations, Theory and Application to Protein Synthesis

Dong, Jiajia 05 May 2008 (has links)
In the process of translation, ribosomes, a type of macromolecule, read the genetic code on a messenger RNA (mRNA) template and assemble amino acids into a polypeptide chain, which folds into a functioning protein product. The ribosomes perform discrete directed motion that is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. We incorporate the essential components of the translation process: ribosomes, cognate tRNA concentrations, and mRNA templates correspond to particles (covering ℓ > 1 sites), hopping rates, and the underlying lattice, respectively. As the hopping rates in an mRNA are set by its sequence (in units of codons), we are especially interested in the effects of a finite number of slow codons on the overall stationary current. To study this systematically, we first explore the effects of local inhomogeneities, i.e., one or two slow sites with hopping rate q < 1, in a TASEP for particles of size ℓ ≥ 1 (in units of lattice sites) using Monte Carlo simulation. Comparing the results for ℓ = 1 and ℓ > 1, we notice that local defects have qualitatively similar effects on the steady state. We focus on the stationary current as well as the density profiles. If there is only a single slow site in the system, we observe a significant dependence of the current on the location of the slow site for both the ℓ = 1 and ℓ > 1 cases. In particular, we notice a novel "edge" effect, i.e., the interaction of a single slow codon with the system boundary. When two slow sites are introduced, more intriguing phenomena emerge, such as dramatic decreases in the current when the two are close together. We analyze the simulation results using several different levels of mean-field theory; a finite-segment mean-field approximation is especially successful in understanding the "edge" effect. While systems with a finite number of defects can be regarded as "contrived" mRNAs, real mRNAs are of greater biological significance.
Inspired by these results, we study several mRNA sequences from Escherichia coli. We argue that an effective translation rate incorporating the context of each codon needs to be taken into account when seeking an efficient strategy to optimize protein production. / Ph. D.
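The effect of a single slow site on the stationary current can be illustrated with a minimal Monte Carlo sketch of an open-boundary TASEP for particles of size ℓ = 1 with random-sequential updates. All parameter values and names here are illustrative, and this simplified update scheme is not necessarily the one used in the dissertation.

```python
import random

def tasep_current(L=50, alpha=0.3, beta=0.3, slow_site=None, q=0.2,
                  sweeps=10000, burn=2000, seed=7):
    """Stationary current (exits per sweep) of an open TASEP, particle size 1.

    alpha/beta are entry/exit rates; slow_site (if given) hops out at rate q < 1.
    """
    rng = random.Random(seed)
    lattice = [0] * L
    exits = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            b = rng.randrange(L + 1)          # bond 0 = entry, bond L = exit
            if b == 0:                        # injection at the left boundary
                if not lattice[0] and rng.random() < alpha:
                    lattice[0] = 1
            elif b == L:                      # extraction at the right boundary
                if lattice[L - 1] and rng.random() < beta:
                    lattice[L - 1] = 0
                    if sweep >= burn:
                        exits += 1
            elif lattice[b - 1] and not lattice[b]:   # interior hop
                rate = q if b - 1 == slow_site else 1.0
                if rng.random() < rate:
                    lattice[b - 1], lattice[b] = 0, 1
    return exits / (sweeps - burn)

j_uniform = tasep_current()              # no defect
j_defect = tasep_current(slow_site=25)   # one slow site mid-lattice
```

For alpha = beta < 1/2 the homogeneous current is approximately alpha(1 - alpha), and the slow bond caps the current well below that; moving `slow_site` toward either boundary probes the "edge" effect described above.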
5

Numerical Investigation of Light Scattering by Atmospheric Particles

Liu, Chao 16 December 2013 (has links)
Atmospheric particles, e.g. ice crystals, dust particles, and black carbon, show significant complexities such as irregular geometries, inhomogeneity, and small-scale surface structures, and they play a significant role in the atmosphere by scattering and absorbing incident solar radiation and terrestrial thermal emission. Knowledge of aerosol scattering properties is a fundamental but challenging aspect of radiative transfer studies and remote sensing applications. This dissertation aims to improve our understanding of the scattering properties of atmospheric particles by investigating both the scattering algorithms and the representation of realistic particles. One part of this dissertation discusses in detail the pseudo-spectral time domain (PSTD) algorithm for calculating scattering properties, its advantages, and the elimination of the Gibbs phenomenon. The applicability of the parallelized PSTD implementation is investigated for both spherical and nonspherical particles over a wide range of sizes and refractive indices; the PSTD is applied to spherical particles with size parameters up to 200 and to randomly oriented nonspherical ones with size parameters up to 100. The relative strengths of the PSTD are also shown by a systematic comparison with the discrete dipole approximation (DDA). The PSTD outperforms the DDA for particles with refractive indices larger than 1.4, and for particles with smaller refractive indices at large sizes (e.g. size parameters larger than 60 for a refractive index of 1.2). These results suggest significant potential of the PSTD for numerical investigations of light scattering and corresponding atmospheric applications. The other part of this dissertation investigates the effects of particle complexities on the light scattering properties of atmospheric particles; three aspects, corresponding to irregular geometry, inhomogeneity, and surface roughness, are studied.
To cover the entire particle size range from the Rayleigh to the geometric-optics regime, the PSTD (for relatively small particles) is combined with the improved geometric-optics method (IGOM), which is applicable only to large particles. A Koch-fractal geometry is introduced to model the light scattering properties of aerosols, and it does an excellent job of reproducing experimental measurements of various mineral dust particles. For inhomogeneous particles, the applicability of effective medium approximations (EMA) is tested; the EMA can approximate the scattering properties of inhomogeneous particles only when the particles are uniform internal mixtures. Furthermore, an irregular rough-surface model is developed to study the effects of small-scale surface roughness on the light scattering properties. In conclusion, the dissertation finds that the complexities of atmospheric particles must be fully considered to obtain their scattering properties accurately.
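One common effective medium approximation of the kind tested above is the Maxwell Garnett mixing rule, which replaces an internally mixed particle with a homogeneous one of effective permittivity. This is a generic sketch: the dissertation does not necessarily use this specific rule, and the refractive indices below are illustrative.

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity for inclusions of volume fraction f in a host."""
    d = eps_incl - eps_host
    return eps_host * (eps_incl + 2 * eps_host + 2 * f * d) \
        / (eps_incl + 2 * eps_host - f * d)

# Permittivity is the square of the complex refractive index m.
m_host, m_incl = 1.33 + 0j, 1.95 + 0.79j   # e.g. water host, soot-like inclusion
m_eff = maxwell_garnett(m_host**2, m_incl**2, 0.1) ** 0.5
```

The rule reduces to the host permittivity at f = 0 and the inclusion permittivity at f = 1; a scattering code (PSTD, DDA, or Lorenz-Mie) can then be run on the homogeneous equivalent and compared against the fully inhomogeneous computation.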
6

Mechanical Studies on the Porcine Aortic Valve Part I: Geometrical Asymmetry, Material Inhomogeneity and Anisotropy in the Porcine Aortic Valve

Chong, Ming 12 1900 (has links)
<p> Various areas of study on natural and prosthetic aortic valves are reviewed. </p> <p> A microtensile technique devised to investigate the inhomogeneous and anisotropic material properties of a porcine aortic valve's leaflets is described. In addition, the theory and apparatus of a new stereophotogrammetric technique to define points in space by their Cartesian coordinates is introduced. The technique is used to investigate the local surface strains and curvatures of a porcine aortic valve's leaflets from 0 to 120 mmHg in vitro. </p> <p> It is found that the valve leaflets display marked inhomogeneity and anisotropy (orthotropy is assumed) in their elastic moduli and transition strains. For the non-coronary leaflet, the radial post-transition moduli vary from 42 to 215 gm/mm² with a mean of 111 gm/mm² (s.d. = 43 gm/mm²), and the radial transition strains vary from 30% to 70% with a mean of 58% (s.d. = 7%). Areas nearer the leaflet's coaptation edge tend to exhibit lower radial transition strains than those near the annulus edge. The central region of the leaflet is found to be the stiffest. For the same non-coronary leaflet, the circumferential post-transition moduli vary from 220 to 590 gm/mm² with a mean of 342 gm/mm² (s.d. = 118 gm/mm²), and the circumferential transition strains vary from 22% to 47% with a mean of 33% (s.d. = 3%). </p> <p> Inhomogeneity between leaflets is also observed; preliminary results suggest that the non-coronary leaflet is the stiffest in the radial direction and the least stiff in the circumferential direction. In comparison, the right coronary leaflet exhibits the largest radial transition strains (~80%) and the smallest circumferential transition strains (~25%). </p> <p> For the diastolic valve in vitro, the circumferential strains are less than 10% at all pressures; this suggests pre-transition behaviour during diastole, which is contrary to the general belief.
Radial strains at diastole vary from 10% to well over 100% and show a definite tendency to increase from the sinus-annulus edge to the coaptation edge. The non-coronary leaflet is the least strained of the leaflets (10% to 60% at diastole). </p> <p> The determination of the pre- or post-transition state at diastole is discussed, and the implications of the results for stress analyses and trileaflet valve designs are noted. </p> / Thesis / Master of Engineering (ME)
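The pre-/post-transition behaviour reported above can be captured by an idealized bilinear stress-strain model: a compliant pre-transition modulus up to the transition strain, then a stiff post-transition modulus. Only the mean radial post-transition modulus (111 gm/mm²) and transition strain (58%) come from the abstract; the function and the pre-transition modulus are illustrative assumptions.

```python
def leaflet_stress(strain, e_pre, e_post, strain_t):
    """Stress (gm/mm^2) under a bilinear pre-/post-transition material model."""
    if strain <= strain_t:
        return e_pre * strain                              # compliant regime
    return e_pre * strain_t + e_post * (strain - strain_t)  # stiff regime

# Mean radial values for the non-coronary leaflet; e_pre is assumed:
stress_diastole = leaflet_stress(0.40, e_pre=5.0, e_post=111.0, strain_t=0.58)
stress_stretched = leaflet_stress(0.80, e_pre=5.0, e_post=111.0, strain_t=0.58)
```

Under these assumed numbers a 40% radial strain stays in the compliant pre-transition regime, consistent with the observation that modest diastolic strains can remain pre-transition.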
7

Structured low rank approaches for exponential recovery - application to MRI

Balachandrasekaran, Arvind 01 December 2018 (has links)
Recovering a linear combination of exponential signals characterized by parameters is highly significant in many MR imaging applications such as parameter mapping and spectroscopy. The parameters carry useful clinical information and can act as biomarkers for various cardiovascular and neurological disorders. However, their accurate estimation requires a large number of high-spatial-resolution images, resulting in long scan times. One way to reduce scan time is to acquire undersampled measurements. The recovery of images is usually posed as an optimization problem, regularized by functions enforcing sparsity, smoothness, or low rank structure. Recently, structured matrix priors have gained prominence in many MRI applications because of their superior performance over the aforementioned conventional priors. However, none of them are designed to exploit the smooth exponential structure of the 3D dataset. In this thesis, we exploit the exponential structure of the signal at every pixel location and the spatial smoothness of the parameters to derive a 3D annihilation relation in the Fourier domain. This relation translates into a product of a Hankel/Toeplitz structured matrix, formed from the k-t samples, and a vector of filter coefficients. We show that this matrix has a low rank structure, which is exploited to recover the images from undersampled measurements. We demonstrate the proposed method on the problem of MR parameter mapping, compare the algorithm with state-of-the-art methods, and observe that the proposed reconstructions and parameter maps have fewer artifacts and errors. We then extend the structured low rank framework to correct field inhomogeneity artifacts in MR images, introducing novel approaches to field map compensation for data acquired using Cartesian and non-Cartesian trajectories.
We adopt the time segmentation approach and reformulate the artifact correction problem as the recovery of a time series of images from undersampled measurements. Upon recovery, the first image of the series corresponds to the distortion-free image. With this reformulation, we can assume that the signal at every pixel follows an exponential characterized by the field map and the damping constant R2*. We exploit the smooth exponential structure of the 3D dataset to derive a low rank structured matrix prior, similar to the parameter mapping case. We demonstrate the algorithm on a spherical MR phantom and human data and show that the artifacts are greatly reduced compared to the uncorrected images. Finally, we develop a structured matrix recovery framework to accelerate cardiac breath-held MRI. We model the cardiac image data as a 3D piecewise constant function and assume that the zeros of a 3D trigonometric polynomial coincide with the edges of the image data, resulting in a Fourier domain annihilation relation. This relation can be compactly expressed in terms of a structured low rank matrix, and we exploit this low rank property to recover the cardiac images from undersampled measurements. We demonstrate the superiority of the proposed technique over conventional sparsity- and smoothness-based methods. Though the model assumed here is not exponential, the proposed algorithm is closely related to the one developed for parameter mapping. The direct implementation of these algorithms has a high memory demand and computational complexity due to the formation and storage of a large multi-fold Toeplitz matrix; to date, this has limited their practical utility on high-dimensional datasets. We address these issues by introducing novel Fourier domain approximations, which result in fast and memory-efficient algorithms for the above applications.
Such approximations allow us to work with large datasets efficiently and eliminate the need to store the Toeplitz matrix. We note that the algorithm developed for exponential recovery is general enough to be applied to other applications beyond MRI.
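The low rank structure exploited above can be seen in a toy example: uniform samples of a sum of k exponentials, arranged in a Hankel matrix, give a matrix of rank exactly k. The parameters below are illustrative, not from any MR dataset.

```python
import numpy as np

def hankel_from_samples(x, rows):
    """Hankel matrix H with H[i, j] = x[i + j], shape (rows, len(x) - rows + 1)."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

# A sum of two damped complex exponentials (illustrative parameters):
t = np.arange(64)
x = 0.7 * np.exp((-0.05 + 0.30j) * t) + 0.3 * np.exp((-0.02 - 0.10j) * t)
H = hankel_from_samples(x, 20)
rank = int(np.linalg.matrix_rank(H, tol=1e-8))   # -> 2, the number of exponentials
```

A length-(k+1) filter whose roots are the exponential bases annihilates every such Hankel row, which is the annihilation relation the structured low rank prior encodes; recovery from undersampled data then becomes structured low rank matrix completion.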
8

Learning on Complex Simulations

Banfield, Robert E 11 April 2007 (has links)
This dissertation explores machine learning in the context of computationally intensive simulations. Complex simulations, such as those performed at Sandia National Laboratories for the Advanced Strategic Computing program, may contain multiple terabytes of data. The amount of data is so large that it is computationally infeasible to transfer it between nodes of a supercomputer, so to create the simulation, the data is distributed spatially. For example, if this dissertation were broken apart spatially, the binding might be one partition, the first fifty pages another, the top three inches of every remaining page another, and the remainder confined to the last partition. This distribution of data is not conducive to learning with existing machine learning algorithms, as it violates some standard assumptions, the most important being that data is independently and identically distributed (i.i.d.). Unique algorithms must be created to deal with the spatially distributed data. Another problem this dissertation addresses is learning from large data sets in general. The pervasive spread of computers into so many areas has enabled data capture from places that previously had no available data. Various algorithms for speeding up classification of small and medium-sized data sets have been developed over the past several years. Most of these build a multiple classifier system, in which the fusion of many classifiers yields higher accuracy than any single classifier, and most also apply directly to the problem of learning from large data sets. In this dissertation, a thorough statistical analysis of several of these algorithms is provided on 57 publicly available data sets. Random forests, in particular, achieves some of the highest accuracy results while speeding up classification significantly.
Random forests, through a classifier fusion strategy known as Probabilistic Majority Voting (PMV) and a variant referred to as Weighted Probabilistic Majority Voting (wPMV), was applied to two simulations. The first is of a canister being crushed, in the same fashion as a human might crush a soda can; each of the half-million physical data points in the simulation contains nine attributes. In the second simulation, a casing is dropped on the ground; this simulation contains 21 attributes and over 1,500,000 data points. Results show that reasonable accuracy can be obtained using PMV or wPMV, but this accuracy is not as high as using all of the data in a non-spatially-partitioned environment. To increase the accuracy, a semi-supervised algorithm was developed. This algorithm is capable of increasing the accuracy several percentage points over that obtained using all of the non-partitioned data, and it offers several benefits, such as reducing the number of labeled examples which scientists would otherwise have to identify manually. It also depicts more accurately the real-world situations scientists encounter when applying these machine learning techniques to new simulations.
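A minimal sketch of probability-based classifier fusion in the spirit of PMV and wPMV: average the per-classifier class-probability vectors, optionally weighted, and take the argmax. Names, shapes, and numbers are illustrative, not Banfield's exact formulation.

```python
import numpy as np

def pmv(probs, weights=None):
    """Fuse (n_classifiers, n_classes) probability rows; return (label, fused)."""
    p = np.asarray(probs, dtype=float)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, dtype=float)
    fused = (w[:, None] * p).sum(axis=0) / w.sum()   # weighted mean per class
    return int(np.argmax(fused)), fused

votes = [[0.6, 0.3, 0.1],   # three classifiers, three classes
         [0.1, 0.7, 0.2],
         [0.3, 0.5, 0.2]]
label_pmv, _ = pmv(votes)                      # unweighted: class 1 wins
label_wpmv, _ = pmv(votes, weights=[3, 1, 1])  # upweighting classifier 0 flips it
```

The weighted variant lets a classifier trained on a more representative spatial partition dominate the vote, which is one plausible motivation for wPMV in the spatially partitioned setting described above.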
9

Lung Clearance Index as a Marker of Ventilation Inhomogeneity in Early Childhood with Health and Disease

Brown, Meghan 05 December 2011 (has links)
Rationale: Ventilation inhomogeneity (VI) may be an early sign of obstructive airway disease. The lung clearance index (LCI) has been suggested as a sensitive marker of VI, although it has not been well characterized in young children, either healthy or with CF and asthma. Objective: To determine whether LCI can detect VI in asymptomatic infants and preschool-age subjects with CF or wheeze/asthma compared to healthy controls. Methods: Sulphur hexafluoride (SF6) multiple breath washout (MBW) testing was completed in all subjects. Results: LCI was found to be dependent on age in a large healthy cohort. Accounting for age, LCI was significantly elevated in the disease groups compared to healthy controls in early childhood, illustrating the early presence of VI in wheezy infants and the progression of disease in CF. Furthermore, the effects of breathing pattern and the variability of MBW parameters showed positive associations with age and VI.
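The LCI has a simple operational definition: the number of lung turnovers (cumulative expired volume divided by FRC) needed for the end-tidal tracer concentration to fall below 1/40 of its starting value. The sketch below uses an idealized, perfectly mixed washout; real MBW analysis involves additional corrections, and the function name and numbers are illustrative.

```python
def lung_clearance_index(cet, tidal_volumes, frc, cutoff=1.0 / 40.0):
    """LCI = lung turnovers until end-tidal tracer drops below `cutoff` of start.

    cet: per-breath end-tidal concentration, as a fraction of the starting value;
    tidal_volumes: expired volume per breath (L); frc: FRC (L).
    """
    cumulative_expired = 0.0
    for c, v in zip(cet, tidal_volumes):
        cumulative_expired += v
        if c < cutoff:
            return cumulative_expired / frc
    raise ValueError("washout did not reach the cutoff")

# Idealized washout: each breath dilutes the tracer by a constant factor.
frc, vt, n = 1.0, 0.15, 60
dilution = frc / (frc + vt)
cet = [dilution ** (b + 1) for b in range(n)]
lci = lung_clearance_index(cet, [vt] * n, frc)
```

Greater ventilation inhomogeneity slows the washout tail, so more turnovers are needed to reach the 1/40 cutoff and LCI rises, which is why LCI is sensitive to early airway disease.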
