601 |
Engineering and analysis of protease fine specificity via site-directed mutagenesis
Flowers, Crystal Ann, 08 October 2013
Altering the substrate specificity of proteases is a powerful approach with potential applications in many areas of therapeutics as well as proteomics. Although the field is still developing, several proteases have been successfully engineered to recognize novel substrates. Previously in our laboratory, eight highly active OmpT variants were engineered with novel catalytic sites. The present study examined the roles of several residues surrounding the active site of OmpT while attempting to use rational design to modulate fine specificity enough to create a novel protease that prefers phosphotyrosine-containing substrates over sulfotyrosine-containing or unmodified tyrosine residues.
In particular, a previously engineered sulfotyrosine-specific OmpT variant (Varadarajan et al., 2008) was the starting point for rationally designing fifteen new OmpT variants in an attempt to create a highly active protease that would selectively cleave phosphotyrosine substrates. Our design approach was to mimic the most selective phosphoryl-specific enzymes and binding proteins by increasing positive charge around the active site. Sulfonyl esters have a net overall charge of -1 near neutral pH, while phosphate monoesters have a net overall charge of -2.
Selected active site residues were mutated by site-directed mutagenesis to lysine, arginine, and histidine. The catalytic activities and substrate specificities of each variant were characterized. Although several variants displayed altered substrate specificity, none preferred phosphotyrosine-containing peptides over sulfotyrosine-containing ones.
Taken together, our results underscore the subtle nature of protease substrate specificity and how elusive engineering fine specificity can be. Evidently, phosphotyrosine-specific variants could not be obtained within the context of our starting sulfotyrosine-specific OmpT derivative using single amino acid changes chosen on the basis of differential charge interactions.
|
602 |
Field investigation of topographic effects using mine seismicity
Wood, Clinton Miller, 16 October 2013
This dissertation details work aimed at better understanding topographic effects in earthquake ground motions. The experiment, conducted in Central-Eastern Utah, used frequent and predictable seismicity produced by underground longwall coal mining as a source of low-intensity ground motions. Locally-dense arrays of seismometers deployed over various topographic features were used to passively monitor seismic energy produced by mining-induced implosions and/or stress redistribution in the subsurface. The research consisted of two separate studies: an initial feasibility experiment (Phase I) followed by a larger-scale main study (Phase II). Over 50 distinct, small-magnitude (M_L < 1.6) seismic events were identified in each phase. These events were analyzed for topographic effects in the time domain using the Peak Ground Velocity (PGV), and in the frequency domain using the Standard Spectral Ratio (SSR) method, the Median Reference Method (MRM), and the Horizontal-to-Vertical Spectral Ratio (HVSR) method. The polarities of the horizontal ground motions were also visualized using directional analyses. The various analysis methods were compared to assess their ability to estimate amplification factors and determine the topographic frequencies of interest for each feature instrumented. The MRM was found to provide the most consistent, and presumably accurate, estimates of the amplification factor and frequency range for topographic effects. Results from this study clearly indicated that topographic amplification of ground motions does in fact occur. These amplifications were very frequency dependent, and the frequency range was correctly estimated in many, but not all, cases using simplified, analytical methods based on the geotechnical and geometrical properties of the topography.
Amplifications in this study generally ranged from 2 to 3 times a reference/baseline site condition, with some complex 3D features experiencing amplifications as high as 10. Maximum amplifications occurred near the crests of topographic features with slope angles greater than approximately 15 degrees, and the amplifications were generally oriented in the direction of steepest topographic relief, with some dependency on wave propagation direction.
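The spectral-ratio methods named in this abstract share a common core: divide a smoothed amplitude spectrum at the station of interest by a reference spectrum. A rough sketch of the single-station HVSR variant is below. This is a hypothetical illustration, not the dissertation's processing code; a simple moving average stands in for the Konno-Ohmachi smoothing typically used in practice.

```python
import numpy as np

def hvsr(h1, h2, v, dt):
    """Horizontal-to-Vertical Spectral Ratio for one three-component record.

    h1, h2 : horizontal components; v : vertical component (equal-length arrays).
    Returns frequencies and the HVSR curve.
    """
    n = len(v)
    freqs = np.fft.rfftfreq(n, d=dt)

    def amp(x):
        # Smoothed Fourier amplitude spectrum (moving average stands in
        # for the Konno-Ohmachi smoothing used in practice).
        a = np.abs(np.fft.rfft(x))
        k = np.ones(5) / 5.0
        return np.convolve(a, k, mode="same")

    h = np.sqrt(amp(h1) * amp(h2))  # geometric mean of the two horizontals
    return freqs, h / amp(v)
```

Peaks in the resulting curve mark frequencies where horizontal motion is amplified relative to vertical, which is one way the topographic frequencies of a feature are identified.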
|
603 |
Laboratory study of calcium-based sorbents' impacts on mercury bioavailability in contaminated sediments
Martinez, Alexandre Mathieu Pierre, 22 October 2013
Mercury-contaminated sediments often act as a sink of mercury and, owing to the bacterial communities they host, produce methylmercury, a potent neurotoxin that readily bioaccumulates. One common remediation approach for managing methylmercury is to amend the sediment by capping it with, or directly mixing in, a sorbent. This thesis assesses the capability of several calcium-based sorbents to act in that capacity. Laboratory experiments were designed to simulate mercury fate and behavior under the geochemical conditions that capping would likely create. Well-mixed slurry experiments showed that, although the gypsum materials tested were disparate, their behavior was similar across sediments ranging from sand to organoclay. The mercury sorption capacities of these gypsums were poor, with sorption coefficients of approximately 300 L/kg. Reduction of methylmercury was minimal, and methylmercury even increased with two of the three materials. Therefore, the three gypsums, although they tend to be more cohesive when wetted, do not constitute viable materials for sediment capping.
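To put the reported sorption coefficient in perspective: under a linear partitioning model, Cs = Kd·Cw, the fraction of dissolved mercury captured at equilibrium depends on Kd and the sorbent dose. The function and the doses below are hypothetical illustrations, not measurements from the thesis.

```python
def sorbed_fraction(kd_L_per_kg, sorbent_dose_g_per_L):
    """Fraction of dissolved mercury captured by a sorbent at equilibrium.

    Linear-partitioning model: Cs = Kd * Cw, with Cs the sorbed concentration
    (mass of Hg per kg of sorbent) and Cw the aqueous concentration.
    """
    kd_L_per_g = kd_L_per_kg / 1000.0
    r = kd_L_per_g * sorbent_dose_g_per_L  # ratio of sorbed to dissolved mass
    return r / (1.0 + r)
```

At Kd ≈ 300 L/kg, even a heavy hypothetical dose of 10 g/L captures only about 75% of the dissolved mercury, whereas strong sorbents can have Kd values several orders of magnitude higher and capture essentially all of it, which is why a coefficient of 300 L/kg reads as poor.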
|
604 |
Characterization of sources of radioargon in a research reactor
Fay, Alexander Gary, 27 June 2014
On Site Inspection is the final measure for verifying compliance of Member States with the Comprehensive Nuclear-Test-Ban Treaty. In order to enable the use of ³⁷Ar as a radiotracer for On Site Inspection, the sources of radioargon background must be characterized and quantified. A radiation transport model of the University of Texas at Austin Nuclear Engineering Teaching Laboratory (NETL) TRIGA reactor was developed to simulate the neutron flux in various regions of the reactor. An activation and depletion code was written to calculate production of ³⁷Ar in the facility based on the results of the radiation transport model. Results showed ³⁷Ar production rates of (6.567±0.31)×10² Bq·kWh⁻¹ in the reactor pool and the air-filled irradiation facilities, and (5.811±0.40)×10⁴ Bq·kWh⁻¹ in the biological shield. Although ⁴⁰Ca activation in the biological shield was found to dominate the total radioargon inventory, the contribution to the effluent release rate would be diminished by the immobility of Ar generated in the concrete matrix and the long diffusion path of mobile radioargon. Diffusion of radioargon out of the reactor pool was found to limit the release rate but would not significantly affect the integrated release activity. The integrated ³⁷Ar release for an 8-hour operation at 950 kW was calculated to be (1.05±0.8)×10⁷ Bq, with pool emissions continuing for days and biological shield emissions continuing for tens of days following the operation. Sensitivity analyses showed that estimates for the time-dependent concentrations of ³⁷Ar in the NETL TRIGA could be made with the calculated buildup coefficients or through analytical solution of the activation equations for only (n,γ) reactions in stable argon or (n,α) reactions in ⁴⁰Ca.
Analyses also indicated that, for a generalized system, the integrated thermal flux can be used to calculate the buildup due to air activation and the integrated fast flux can be used to calculate the buildup due to calcium activation. Based on the results of the NETL TRIGA, an estimate of the global research reactor source term for ³⁷Ar and an estimate of ground-level ³⁷Ar concentrations near a facility were produced.
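The activation bookkeeping behind such release estimates follows the standard buildup equation A(t) = R(1 − e^(−λt)) during irradiation, with free exponential decay afterwards. A minimal sketch is below; the production rate used in the test is hypothetical, and only the ~35-day ³⁷Ar half-life is a physical constant.

```python
import math

AR37_HALF_LIFE_S = 35.04 * 86400.0  # Ar-37 half-life, ~35 days, in seconds
LAMBDA = math.log(2) / AR37_HALF_LIFE_S

def activity_after_run(production_rate, run_time_s, cooldown_s=0.0):
    """Ar-37 activity (Bq) from a constant production rate (atoms/s)
    during a reactor run, followed by free decay for cooldown_s seconds.

    Standard activation equation: A(t) = R * (1 - exp(-lambda * t)).
    """
    a_end = production_rate * (1.0 - math.exp(-LAMBDA * run_time_s))
    return a_end * math.exp(-LAMBDA * cooldown_s)
```

Because λt is tiny for an 8-hour run (λ ≈ 2.3×10⁻⁷ s⁻¹), buildup is nearly linear in energy delivered, which is consistent with quoting production rates per kWh, and the slow decay explains emissions persisting for tens of days after shutdown.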
|
605 |
Evaluation of one-dimensional site response methodologies using borehole arrays
Zalachoris, Georgios, 02 July 2014
Numerical modeling techniques commonly used to compute the response of soil and rock media under earthquake shaking are evaluated by analyzing the observations provided by instrumented borehole arrays. The NIED KiK-net database in Japan is selected as the main source of borehole array data for this study. The stiffness of the site and the availability of high-intensity motions are the primary factors considered in selecting appropriate KiK-net borehole arrays for investigation. Overall, 13 instrumented vertical arrays are investigated using over 750 recorded ground motions characterized by low (less than 0.05 g) to high (greater than 0.3 g) recorded peak ground accelerations at the downhole sensor. Based on data from the selected borehole arrays, site response predictions using 1-D linear elastic (LE) analysis, equivalent linear (EQL) analysis, equivalent linear analysis with frequency-dependent soil properties (EQL-FD), and fully nonlinear analysis (NL) are compared with the borehole observations. Initially, the low-intensity motions are used to evaluate common assumptions regarding 1-D site response analysis. First, we identify the borehole wavefield best simulating the actual boundary condition at depth by comparing the theoretical linear-elastic (LE) and observed responses. Then, we identify the best-fit small-strain damping profiles that can incorporate the additional in-situ attenuation mechanisms. Finally, we assess the validity of the one-dimensional modeling assumption. Our analyses indicate that the appropriate boundary condition for analysis of a borehole array depends on the depth of the borehole sensor and that, for most of the considered vertical arrays, the one-dimensional assumption reasonably simulates the actual wave propagation pattern.
In the second part of this study, we evaluate the accuracy of the EQL, EQL-FD, and NL site response methods by quantifying the misfit (i.e., residual) between the simulations and observations at different levels of shaking. The performance of the theoretical models is evaluated both on a site-by-site basis and in an aggregated manner. Thereafter, the variability in the predicted response from the three site response methods is assessed. Comparisons with the observed responses indicate that the misfit of simulations can be significant at short periods and large strains. Moreover, all models seem to be characterized by the same level of variability irrespective of the level of shaking. Finally, several procedures that can be used to improve the accuracy of the one-dimensional EQL, EQL-FD, and NL site response analyses are investigated. First, an attempt is made to account for the shear strength of the soil materials at large shear strains. Additionally, several modifications to the EQL-FD approach are proposed and evaluated against recordings from the borehole arrays. Our analyses indicate that the accuracy of the theoretical models can be partly improved by incorporating the proposed modifications.
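Misfit between simulated and observed response spectra is commonly quantified as natural-log residuals of spectral ordinates, aggregated across many motions into a bias and a standard deviation per period. A generic sketch of that bookkeeping (not the dissertation's code):

```python
import numpy as np

def ln_residuals(observed, predicted):
    """Period-by-period misfit ln(obs/pred) between observed and simulated
    response spectra; positive values mean the simulation underpredicts."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.log(observed / predicted)

def aggregate_misfit(residual_matrix):
    """Mean and sample standard deviation of residuals across many motions,
    giving the model bias and variability at each period."""
    r = np.asarray(residual_matrix, dtype=float)
    return r.mean(axis=0), r.std(axis=0, ddof=1)
```

Binning such residuals by input-motion intensity is one way to expose the short-period, large-strain misfit described above.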
|
606 |
Probabilistic assessments of the seismic stability of slopes: improvements to site-specific and regional analyses
Wang, Yubing, 03 July 2014
Earthquake-induced landslides are a significant seismic hazard that can generate large economic losses. Predicting earthquake-induced landslides often involves an assessment of the expected sliding displacement induced by the ground shaking. A deterministic approach is commonly used for this purpose. This approach predicts sliding displacements using the expected ground shaking and the best-estimate slope properties (i.e., soil shear strengths, groundwater conditions, and thicknesses of sliding blocks), and does not consider the aleatory variability in predictions of ground shaking or sliding displacements, nor the epistemic uncertainties in the slope properties. In this dissertation, a probabilistic framework for predicting the sliding displacement of flexible sliding masses during earthquakes is developed. This framework computes a displacement hazard curve using: (1) a ground motion hazard curve from a probabilistic seismic hazard analysis, (2) a model for predicting the dynamic response of the sliding mass, (3) a model for predicting the sliding response of the sliding mass, and (4) a logic tree that incorporates the uncertainties in the various input parameters. The developed probabilistic framework for flexible sliding masses is applied to a slope at a site in California. The results of this analysis show that the displacements predicted by the probabilistic approach are larger than those from the deterministic approach due to the influence of the uncertainties in the slope properties. Reducing these uncertainties can reduce the predicted displacements. Regional maps of seismic landslide potential are used in land-use planning and to identify zones that require detailed, site-specific studies. Current seismic landslide hazard mapping efforts typically utilize deterministic approaches to estimate rigid sliding block displacements and identify potential slope failures.
A probabilistic framework that uses displacement hazard curves and logic-tree analysis is developed for regional seismic landslide mapping efforts. A computationally efficient approach is developed that allows the logic-tree approach to be applied for regional analysis. Anchorage, Alaska is used as a study area to apply the developed approach. With aleatory variability and epistemic uncertainties considered, the probabilistic map shows that the area of high/very high hazard of seismic landslides increases by a factor of 3 compared with a deterministic map.
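The rigid sliding-block displacements referred to above come from Newmark-type double integration: the block accumulates relative velocity whenever ground acceleration exceeds the yield acceleration k_y, and the velocity is integrated into permanent displacement. A minimal one-directional (downslope-only) sketch with hypothetical inputs, not the dissertation's mapping pipeline:

```python
import numpy as np

def newmark_displacement(accel, dt, ky_g, g=9.81):
    """Rigid sliding-block (Newmark) displacement for one acceleration
    history.  accel in units of g, dt in seconds, ky_g the yield
    acceleration in g.  Downslope-only sliding; returns meters.
    """
    rel_acc = (np.asarray(accel) - ky_g) * g  # block accelerates when a > ky
    vel = 0.0
    disp = 0.0
    for a in rel_acc:
        if vel > 0.0 or a > 0.0:              # a sliding episode is active
            new_vel = max(vel + a * dt, 0.0)  # block cannot slide upslope
            disp += 0.5 * (vel + new_vel) * dt  # trapezoidal integration
            vel = new_vel
    return disp
```

Running this over a suite of ground motions and k_y values drawn from a logic tree is, in essence, how a displacement hazard curve is assembled.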
|
607 |
Studies in bacterial genome engineering and its applications
Enyeart, Peter James, 12 August 2015
Many different approaches exist for engineering bacterial genomes. The most common current methods include transposons for random mutagenesis, recombineering for specific modifications in Escherichia coli, and targetrons for targeted knock-outs. Site-specific recombinases, which can catalyze a variety of large modifications at high efficiency, have been relatively underutilized in bacteria. Employing these technologies in combination could significantly expand and empower the toolkit available for modifying bacteria.
Targetrons can be adapted to carry functional genetic elements to defined genomic loci. For instance, we re-engineered targetrons to deliver lox sites, the recognition target of the site-specific recombinase Cre. We used this system on the E. coli genome to delete over 100 kilobases, invert over 1 megabase, insert a 12-kilobase polyketide-synthase operon, and translocate a 100-kilobase section to another site over 1 megabase away. We further used it to delete a 15-kilobase pathogenicity island from Staphylococcus aureus, catalyze an inversion of over 1 megabase in Bacillus subtilis, and simultaneously deliver nine lox sites to the genome of Shewanella oneidensis. This represents a powerful, versatile, and broad-host-range solution for bacterial genome engineering.
We also placed lox sites on mariner transposons, which we leveraged to create libraries of millions of strains harboring rearranged genomes. The resulting data represents the most thorough search of the space of potential genomic rearrangements to date. While simple insertions were often most adaptive, the most successful modification found was an inversion that significantly improved fitness in minimal media. This approach could be pushed further to examine swapping or cutting and pasting regions of the genome, as well.
As potential applications, we present work towards implementing and optimizing extracellular electron transfer in E. coli, as well as mathematical models of bacteria engineered to adhere to the principles of the economic concept of comparative advantage, which indicate that the approach is feasible, and furthermore indicate that economic cooperation is favored under more adverse conditions. Extracellular electron transfer has applications in bioenergy and biomechanical interfaces, while synthetic microbial economics has applications in designing consortia-based industrial bioprocesses. The genomic engineering methods presented above could be used to implement and optimize these systems.
|
608 |
Issues related to site property variability and shear strength in site response analysis
Griffiths, Shawn Curtis, 18 September 2015
Nonlinear site response analyses are generally preferred over equivalent linear analyses for soft soil sites subjected to high-intensity input ground motions. However, both nonlinear and equivalent linear analyses often result in large induced shear strains (3-10%) at soft sites, and these large strains may generate unusual characteristics in the predicted surface ground motions. One source of the overestimated shear strains may be the unrealistically low shear strengths implied by commonly used modulus reduction curves. Therefore, modulus reduction and damping curves can be modified at shear strains greater than 0.1% to provide a more realistic soil model for site response. However, even after these modifications, nonlinear and equivalent linear site response analyses may still generate unusual surface acceleration time histories and Fourier amplitude spectra at soft soil sites subjected to high-intensity input ground motions. As part of this work, equivalent linear and nonlinear 1D site response analyses for the well-known Treasure Island site demonstrate the challenges associated with accurately modeling large shear strains, and the resulting surface response, at soft soil sites. Accounting for the uncertainties in the shear wave velocity profile is an important part of a properly executed site response analysis. Surface wave data from Grenoble, France and Mirandola, Italy have been used to determine shear wave velocity (Vs) profiles from inversion of surface wave data. These inverted Vs profiles have, in turn, been used to determine boundary, median, and statistically based randomly generated profiles. The theoretical dispersion curves from the inversion analyses, as well as from the boundary, median, and randomly generated Vs profiles, are compared with experimentally measured surface wave data.
It is found that the median theoretical dispersion curve provides a satisfactory fit to the experimental data, but the boundary-type theoretical dispersion curves do not. Randomly generated profiles result in some theoretical dispersion curves that fit the experimental data and many that do not. Site response analyses revealed greater variability in the response spectra and amplification factors from the randomly generated Vs profiles than from the inversion or boundary Vs profiles.
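The sensitivity of site response to the Vs profile can be illustrated with the simplest 1-D model: a uniform viscoelastic layer over rigid bedrock, whose linear-elastic transfer function is |1/cos(ωH/Vs*)|, where the complex Vs* carries the small-strain damping. The layer properties below are invented for illustration; this is a textbook-style sketch, not the dissertation's analysis code.

```python
import numpy as np

def layer_amplification(freqs, thickness, vs, damping):
    """Linear-elastic amplification of a uniform soil layer over rigid
    bedrock, |1/cos(k*H)|, using a complex shear-wave velocity to include
    small-strain damping.  Peaks occur near the site frequencies
    f_n = (2n - 1) * vs / (4 * H)."""
    vs_complex = vs * (1.0 + 1j * damping)  # viscoelastic correspondence
    k = 2.0 * np.pi * np.asarray(freqs) / vs_complex
    return 1.0 / np.abs(np.cos(k * thickness))
```

Small changes in vs shift the resonant peaks, which is why randomized Vs profiles spread the predicted response spectra more than a single inverted profile does.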
|
609 |
A statistical analysis of activity organization: Grasshopper Pueblo, Arizona
Ciolek-Torrello, Richard Sigmund, 1949-, January 1978
No description available.
|
610 |
The Palenque mapping project: settlement and urbanism at an ancient Maya city
Barnhart, Edwin Lawrence, 15 March 2011
Not available.
|