  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Evaluation of Coarse Sun Sensor in a Miniaturized Distributed Relative Navigation System: An Experimental and Analytical Investigation

Maeland, Lasse, May 2011 (has links)
Observing the relative state of two space vehicles has been an active field of research since the earliest attempts at space rendezvous and docking in the 1960s. A number of techniques have been employed successfully by space agencies, and the importance of these systems has been repeatedly demonstrated during the on-orbit assembly and continuous re-supply of the International Space Station. More recent efforts are focused on technologies that can enable fully automated navigation and control of space vehicles. Technologies which have previously been investigated or are actively researched include Video Guidance Systems (VGS), Light Detection and Ranging (LIDAR), RADAR, Differential GPS (DGPS), and Visual Navigation Systems. The proposed system leverages the theoretical foundation advanced in the development of VisNav, invented at Texas A&M University, and the miniaturized, commercially available Northstar sensor from Evolution Robotics. The dissertation first surveys contemporary technology, followed by an analytical investigation of the coarse sun sensor and the errors associated with utilizing it in the near field. Next, the commercial Northstar sensor is investigated, using fundamentals to generate a theoretical model of its behavior, followed by the development of an experiment for investigating and characterizing the sensor's performance. Experimental results are then presented and compared with a numerical simulation of single-sensor system performance. A case study of a two-sensor implementation is presented, evaluating the proposed system's performance in a multi-sensor configuration. The initial theoretical analysis relied on the cosine model, which proved inadequate in fully capturing the response of the coarse sun sensor. Fresnel effects were identified as a significant source of unmodeled sensor behavior and were subsequently incorporated into the model. 
Additionally, near-field effects were studied and modeled. The near-field effects of significance include: unequal incidence angle, unequal incidence power, and non-uniform radiated power. It was found that the sensor displayed inherent instabilities in the 0.3 degree range. However, it was also shown that the sensor could be calibrated to this level. Methods for accomplishing calibration of the sensor in the near-field were introduced and feasibility of achieving better than 1 cm and 1 degree relative position and attitude accuracy in close proximity, even on a small satellite platform, was determined.
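The distinction between the ideal cosine response and a Fresnel-corrected response can be sketched as follows. This is an illustrative model only: the cover refractive index n = 1.5 is an assumed value, not a parameter from the thesis.

```python
import math

def cosine_model(theta):
    """Ideal coarse-sun-sensor response: output proportional to cos(incidence)."""
    return max(math.cos(theta), 0.0)

def fresnel_transmittance(theta, n=1.5):
    """Unpolarized Fresnel transmittance through an air-to-dielectric interface.
    The refractive index n is an assumed cover value (not from the thesis)."""
    if theta >= math.pi / 2:
        return 0.0
    # Snell's law gives the refracted angle inside the dielectric
    theta_t = math.asin(math.sin(theta) / n)
    if theta > 0:
        # Fresnel reflectances for s- and p-polarized light
        rs = (math.sin(theta - theta_t) / math.sin(theta + theta_t)) ** 2
        rp = (math.tan(theta - theta_t) / math.tan(theta + theta_t)) ** 2
    else:
        # normal incidence: both polarizations reduce to ((1-n)/(1+n))^2
        rs = rp = ((1.0 - n) / (1.0 + n)) ** 2
    return 1.0 - 0.5 * (rs + rp)

def corrected_model(theta, n=1.5):
    """Cosine response attenuated by the Fresnel transmittance of the window."""
    return cosine_model(theta) * fresnel_transmittance(theta, n)
```

Because the transmittance falls off sharply at large incidence angles, the corrected response departs increasingly from the pure cosine model there, which is consistent with the abstract's observation that the cosine model alone could not capture the sensor's behavior.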
132

Initiation Of Motion Of Coarse Solitary Particles On Rough Channel Beds

Kucuktepe, Omer Ilker 01 December 2009 (has links) (PDF)
In this study the incipient motion of coarse solitary particles on channel beds having different roughness heights was experimentally investigated. The experiments were conducted in a tilting flume of rectangular cross-section having a working length of 12 m and a rough bed composed of at least two layers of coarse gravel of almost constant size. The roughness material of the channel bed was changed three times. The slope of the channel bed and the discharge are the two main parameters that determine the initiation of motion of a given particle. The artificial particles tested in the experiments were obtained by mixing cement and iron dust at certain ratios. Dimensionless hydraulic parameters determined from theoretical analysis were related to each other. Flow depths and velocity profiles were measured, and the flow conditions representing the critical conditions for initiation of motion were expressed in terms of critical velocities and shear velocities. The results were compared with the results of previous studies.
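A standard way to express such critical conditions is through the shear velocity and the dimensionless Shields parameter. The sketch below is illustrative only: the depth, slope, particle size, and specific gravity are assumed example values, not measurements from this study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shear_velocity(depth, slope):
    """Shear velocity u* = sqrt(g * h * S) from the depth-slope product
    (wide-channel approximation: hydraulic radius taken as flow depth h)."""
    return math.sqrt(G * depth * slope)

def shields_parameter(u_star, d, specific_gravity=2.65):
    """Dimensionless (Shields) mobility number for a particle of diameter d."""
    return u_star ** 2 / ((specific_gravity - 1.0) * G * d)

# Example (assumed values): 0.15 m deep flow on a 1% slope over 40 mm particles
u_star = shear_velocity(0.15, 0.01)
theta = shields_parameter(u_star, 0.040)
```

A Shields value well below the commonly quoted critical range of roughly 0.03-0.06 suggests the particle stays at rest; experiments like those described here are what pin down the actual threshold for a given bed roughness.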
133

Engineering approaches to address errors in measured and predicted particulate matter concentrations

Wanjura, John David 16 August 2006 (has links)
Some of the air pollution regulations in the United States are based on an application of the National Ambient Air Quality Standards at the property line. Agricultural operations such as cotton gins, feed mills, and cattle feed yards may be inappropriately regulated under such rules if the current methods of measuring and predicting the concentrations of regulated pollutants are used. The regulated particulate matter pollutants are those with aerodynamic equivalent diameters less than or equal to a nominal 10 and 2.5 micrometers (PM10 and PM2.5), respectively. The current Federal Reference Method PM10 and PM2.5 samplers exhibit oversampling errors when sampling dusts with particle size distributions similar to those of agricultural sources. These errors are due to the interaction of the performance characteristics of the sampler with the particle size distribution of the dust being sampled. The results of this work demonstrate the development of a new sampler that may be used to accurately sample total suspended particulate (TSP) concentrations. The particle size distribution of TSP samples can be obtained and used to more accurately determine PM10 and PM2.5 concentrations. The results of this work indicate that accurate measures of TSP can be taken on a low-volume basis. This work also shows that the low-volume samplers maintain more consistent sampling flow rates and provide more robust measurements of TSP in high dust concentrations. The EPA-approved dispersion model most commonly used to estimate concentrations downwind from a stationary source is the Industrial Source Complex Short Term version 3 (ISCST3). ISCST3 is known to over-predict downwind concentrations from low-level point sources. The results of this research show that the magnitude of these errors could be as much as 250%. 
A new approach to correcting these errors using the power law with P values as a function of stability class and downwind distance is demonstrated. Correcting the results of ISCST3 using this new approach results in an average estimated concentration reduction factor of 2.3.
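The functional forms involved can be sketched as follows. The power-law exponent p = 0.15 and the example numbers are assumed for illustration; only the average reduction factor of 2.3 comes from the abstract, and the dissertation's actual P values vary with stability class and downwind distance.

```python
def power_law_profile(u_ref, z, z_ref=10.0, p=0.15):
    """Standard wind-speed power law u(z) = u_ref * (z / z_ref)^p.
    The exponent p depends on atmospheric stability (assumed value here)."""
    return u_ref * (z / z_ref) ** p

def corrected_concentration(modeled, factor=2.3):
    """Scale an over-predicted ISCST3 concentration down by an average
    reduction factor; 2.3 is the average reported in the abstract."""
    return modeled / factor
```

For instance, a modeled downwind concentration of 230 ug/m^3 would correct to about 100 ug/m^3 under the average factor, while the profile function returns the reference speed unchanged at the reference height.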
134

Modeling and simulations of single-stranded RNA viruses

Boz, Mustafa Burak 21 June 2012 (has links)
The presented work applies recent methodologies to the modeling and simulation of single-stranded RNA viruses. We first present methods for modeling RNA molecules using the coarse-grained modeling package YUP. Coarse-grained models simplify complex structures such as viruses and let us study the general behavior of complex biological systems that otherwise cannot be studied in all-atom detail. Second, we modeled the first all-atom T=3, icosahedral, single-stranded RNA virus, Pariacoto virus (PaV). The X-ray structure of PaV shows only 35% of the total RNA genome and 88% of the capsid. We modeled both missing portions of RNA and protein. The final model of PaV demonstrated that the positively charged protein N-terminus was located deep inside the RNA. We propose that the positively charged N-terminal tails make contact with the RNA genome, neutralize the negative charges in the RNA, and subsequently collapse the RNA/protein complex into an icosahedral virus. Third, we simulated T=1 empty capsids using a coarse-grained model of three capsid proteins as a wedge-shaped triangular capsid unit. We varied the edge angle and the potentials of the capsid units to perform empty-capsid assembly simulations. The final model and potential were further improved for the whole-virus assembly simulations. Finally, we performed stability and assembly simulations of the whole virus using coarse-grained models. We tested various strengths of RNA-protein tail and capsid protein-capsid protein attractions in our stability simulations and narrowed our search for optimal potentials for assembly. The assembly simulations were carried out with two different protocols: co-transcriptional and post-transcriptional. The co-transcriptional assembly protocol mimics assembly occurring during replication of the new RNA; proteins bind the partly transcribed RNA in this protocol. 
The post-transcriptional assembly protocol assumes that the RNA is completely transcribed in the absence of proteins. Proteins later bind to the fully transcribed RNA. We found that both protocols can assemble viruses, when the RNA structure is compact enough to yield a successful virus particle. The post-transcriptional protocol depends more on the compactness of the RNA structure compared to the co-transcriptional assembly protocol. Viruses can exploit both assembly protocols based on the location of RNA replication and the compactness of the final structure of the RNA.
135

A way of reducing the energy demand in TMP by shear/compression deformation

Viforr, Silvia January 2007 (has links)
One of the major cost factors in mechanical pulp production is the electrical energy input. Much of the research in this field has therefore been devoted to an understanding of the mechanisms in the refining process and, consequently, to finding ways of reducing the electrical energy consumption. Shear and compression are probably the main types of fibre deformation occurring in refiners for collapsing and fibrillating the fibres into a suitable pulp. In current refiners, the repeated mechanical action of the bars on the fibres consumes large amounts of energy in a treatment of mechanical fibres that is almost random.

Fundamental studies of the deformation of wood have indicated that a combination of shearing and compression forces is highly beneficial in terms of fibre deformation with a low energy demand. Pure compression is able to permanently deform the fibre but requires a substantial amount of work, while pure shearing, although much less energy demanding, does not lead to any permanent deformations. A more suitable application of shear and compression forces on the wood fibres during the refining process could be a way to develop fibres at a lower energy demand. These ideas have been studied in this work in an attempt to find new ways of saving energy in the mechanical pulping process.

The first paper in this thesis discusses the way of producing wood shavings and the introduction of shear/compression deformations in these, as well as the potential benefits of using them instead of wood chips as raw material for TMP production. With the shaving process, high deformations in the wood cells were achieved by the shear and compression forces. This led to energy savings of about 25% at a given tensile index, when compared to traditional chips. The quality of the pulp produced from wood shavings was found to be better than that of the pulp produced from wood chips in terms of strength properties (except for tear index) and optical properties at comparable energy levels.

Another way of reducing energy consumption in refining, involving limited shear combined with compression forces for the mechanical treatment of both wood chips and coarse fibres, was also studied. This work shows that such a treatment resulted in a high degree of fibre collapse at low energy demands. Even the thick-walled transition fibres could be permanently deformed. Furthermore, refining trials utilising shear- and compression-pre-treated chips showed that the strength properties (except for tear index), along with the optical properties of a TMP, could be improved and the electrical energy consumed could be reduced by approx. 100 kWh/tonne, when compared to untreated chips.

The results from the pilot trials described in this work could be used as a starting point for further implementation in the industry, in order to identify the most efficient way of producing mechanical pulp with a lower consumption of electrical energy.
136

Woody debris and macroinvertebrate community structure of low-order streams in Colville National Forest, Washington

Rogers, Megan Bryn, January 2003 (has links) (PDF)
Thesis (M.S.), Washington State University, 2003. Title from PDF title page (viewed May 22, 2005). Includes bibliographical references (p. 52-56).
137

Revamping aggregate property requirements for portland cement concrete

Stutts, Zachary William 18 June 2012 (has links)
Current Texas Department of Transportation (TxDOT) procedures for evaluating coarse aggregate for portland cement concrete (PCC) have been in place for over 39 years. Item 421 in the TxDOT "Standard Specifications for Construction and Maintenance of Highways, Streets, and Bridges" describes the tests and test limits that must be met by aggregates before they can be approved for use in portland cement concrete applications. The intention of Item 421 is to ensure that only strong, durable aggregates are used in concrete so that the life of the concrete is not cut short by common distress mechanisms, which ultimately lead to costly repairs and replacements. The two main tests currently used by TxDOT to evaluate aggregates are the magnesium sulfate soundness test and the Los Angeles abrasion and impact test. These tests are meant to characterize the overall soundness and the resistance to abrasion and impact of an aggregate, respectively. Unfortunately, past research has shown that the magnesium sulfate soundness test and the Los Angeles abrasion and impact test are not able to successfully predict the field performance of an aggregate in concrete. The requirements of Item 421 have thus far done a reasonably good job of ensuring long-lasting concrete; however, the current tests and test limits may be unnecessarily precluding the use of some local materials. As high-quality aggregate sources are depleted and transportation costs increase, it will become more necessary to distinguish good performers from marginal and poor performers. If aggregate tests can be found that demonstrate better correlations with field performance, it may be possible to use more local aggregate sources and still provide the desired level of reliability for pavements, bridges, and other TxDOT concrete applications. Researchers are in the process of collecting coarse and fine aggregates commonly used in Texas and testing these aggregates with a variety of alternative tests. 
Researchers will attempt to relate this test data to concrete behavior and ultimately recommend tests for improved TxDOT aggregate specifications.
138

Bayesian learning methods for potential energy parameter inference in coarse-grained models of atomistic systems

Wright, Eric Thomas 27 August 2015 (has links)
The present work addresses issues related to the derivation of reduced models of atomistic systems, their statistical calibration, and their relation to atomistic models of materials. The reduced model, known in the chemical physics community as a coarse-grained model, is calibrated within a Bayesian framework. Particular attention is given to developing likelihood functions, assigning priors on coarse-grained model parameters, and using data from molecular dynamics representations of atomistic systems to calibrate coarse-grained models such that certain physically relevant atomistic observables are accurately reproduced. The developed Bayesian framework is then applied in three case studies of increasing complexity and practical application. A freely jointed chain model is considered first for illustrative purposes. The next example entails the construction of a coarse-grained model for a liquid heptane system, with the explicit design goal of accurately predicting a vapor-liquid transfer free energy. Finally, a coarse-grained model is developed for an alkylthiophene polymer that has been shown to have practical use in certain types of photovoltaic cells. The development therein employs Bayesian decision theory to select an optimal CG potential energy function. Subsequently, this model is subjected to validation tests in a prediction scenario that is relevant to the performance of a polyalkylthiophene-based solar cell.
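A minimal sketch of this kind of Bayesian calibration, using a Metropolis sampler on a toy one-parameter model: the observable, data, and noise level below are invented for illustration, and the thesis's actual likelihoods and molecular observables are far more elaborate.

```python
import math
import random

def metropolis_calibrate(observable, data, prior_logpdf, sigma,
                         theta0=1.0, steps=5000, proposal_sd=0.1, seed=0):
    """Metropolis sampler for the posterior of one coarse-grained parameter.
    `observable(theta)` predicts the quantity that the molecular-dynamics
    `data` should reproduce (all names here are illustrative)."""
    rng = random.Random(seed)

    def log_post(theta):
        # Gaussian likelihood around the model prediction, plus the log-prior
        pred = observable(theta)
        loglike = sum(-0.5 * ((d - pred) / sigma) ** 2 for d in data)
        return loglike + prior_logpdf(theta)

    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(steps):
        cand = theta + rng.gauss(0.0, proposal_sd)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept/reject step
            theta, lp = cand, lp_cand
        samples.append(theta)
    return samples

# Toy check: calibrate theta so that observable(theta) = 2*theta matches data near 3.0
samples = metropolis_calibrate(lambda t: 2.0 * t, [3.0, 3.1, 2.9],
                               prior_logpdf=lambda t: 0.0, sigma=0.1)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

With a flat prior the posterior concentrates where the prediction matches the data mean, i.e. near theta = 1.5 in this toy setup; in the thesis the same updating machinery is driven by atomistic observables rather than synthetic numbers.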
139

Selection, calibration, and validation of coarse-grained models of atomistic systems

Farrell, Kathryn Anne 03 September 2015 (has links)
This dissertation examines the development of coarse-grained models of atomistic systems for the purpose of predicting target quantities of interest in the presence of uncertainties. It addresses fundamental questions in computational science and engineering concerning the model selection, calibration, and validation processes used to construct predictive reduced-order models through a unified Bayesian framework. This framework, enhanced with the concepts of information theory, sensitivity analysis, and Occam's Razor, provides a systematic means of constructing coarse-grained models suitable for use in a prediction scenario. The novel application of a general framework of statistical calibration and validation to molecular systems is presented. Atomistic models, which themselves contain uncertainties, are treated as the ground truth and provide data for the Bayesian updating of model parameters. The open problem of selecting appropriate coarse-grained models is addressed through the powerful notion of Bayesian model plausibility. A new, adaptive algorithm for model validation is presented. The Occam-Plausibility ALgorithm (OPAL), so named for its adherence to Occam's Razor and its use of Bayesian model plausibilities, identifies, among a large set of models, the simplest model that passes the Bayesian validation tests and may therefore be used to predict the chosen quantities of interest. By discarding or ignoring unnecessarily complex models, this algorithm has the potential to reduce computational expense, both through the systematic process of considering subsets of models and through the implementation of the prediction scenario with the simplest valid model. An application to the construction of a coarse-grained system of polyethylene is given to demonstrate the implementation of molecular modeling techniques; the process of Bayesian selection, calibration, and validation of reduced-order models; and OPAL. 
The potential of the Bayesian framework for the process of coarse graining, and of OPAL as a means of determining a computationally conservative valid model, is illustrated on the polyethylene example.
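The control flow of OPAL as described above can be sketched as follows. The four callables are placeholders for the real Bayesian plausibility, calibration, and validation machinery, and the candidate models are invented for illustration.

```python
from itertools import groupby

def opal(models, plausibility, calibrate, is_valid):
    """Control-flow sketch of the Occam-Plausibility ALgorithm (OPAL):
    examine candidate models from simplest to most complex, calibrate only
    the most plausible model in each complexity tier, and return the first
    one that passes the validation test (all callables are illustrative)."""
    ranked = sorted(models, key=lambda m: m["n_params"])
    for n_params, tier in groupby(ranked, key=lambda m: m["n_params"]):
        best = max(tier, key=plausibility)   # Bayesian model plausibility
        calibrated = calibrate(best)         # posterior update of parameters
        if is_valid(calibrated):             # Bayesian validation test
            return calibrated                # simplest valid model wins
    return None                              # no candidate survived validation

# Toy usage: model "B" is the simplest candidate that passes validation
candidates = [{"name": "A", "n_params": 1, "ok": False},
              {"name": "B", "n_params": 2, "ok": True},
              {"name": "C", "n_params": 5, "ok": True}]
chosen = opal(candidates,
              plausibility=lambda m: 1.0,
              calibrate=lambda m: m,
              is_valid=lambda m: m["ok"])
```

The key design choice this mirrors is that expensive calibration and validation are only ever run on one model per complexity tier, and the search stops as soon as a valid model is found, which is where the computational savings claimed above come from.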
140

Broadband Arrayed Waveguide Grating Multiplexers on InP

Rausch, Kameron Wade January 2005 (has links)
Coarse Wavelength Division Multiplexing (CWDM) is becoming a popular way to increase the optical throughput of fibers for short- to medium-haul networks at reduced cost. The International Telecommunications Union (ITU) has defined the CWDM network to consist of eighteen channels with channel spacings of 20 nm, starting at 1270 nm and ending at 1610 nm. Four- and eight-channel AWGs on InP, suitable for CWDM, were fabricated using a novel and versatile S-shape design. The standard horseshoe layout will not work on semiconductor for AWGs with a free spectral range (FSR) larger than 30 nm. The AWG design provides operation insensitive to thermal and polarization fluctuations, which is key for low-cost operation and packaging. It will be shown that refractive index changes over the large operating wavelength band produced negligible effects in the transmission spectrum. Standard AWG design assumes the refractive index is constant over the operating wavelength band; as a result, the output waveguide separations are held constant on the second star coupler. As the channel number increases, secondary focal dispersion caused by a changing refractive index can have detrimental effects on performance. A new design method will be introduced which includes refractive index dispersion by allowing the output waveguide separations to vary. The new design is consistent with standard design but is applicable in materials with a linear index dispersion over an arbitrarily large wavelength band. Lastly, a method for increasing transmission using multimode waveguides is discussed. Traditionally, single-mode waveguides are required in order to prevent higher-order waveguide modes from creating ghost images in the output spectrum. Using bend loss and waveguide junction offsets, higher-order modes can be filtered from the output, thereby eliminating ghost images while at the same time increasing transmission.
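The ITU CWDM grid described above is easy to verify arithmetically: eighteen channels on a 20 nm pitch starting at 1270 nm do indeed end at 1610 nm.

```python
# ITU CWDM grid: 18 channels, 20 nm spacing, 1270 nm through 1610 nm
channels = [1270 + 20 * i for i in range(18)]  # wavelengths in nm
```

This 340 nm span is also why the abstract's S-shape layout matters: an AWG covering the whole grid needs a free spectral range far larger than the ~30 nm that the standard horseshoe layout supports on semiconductor.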
