461. DNA electrophoresis in photopolymerized polyacrylamide gels on a microfluidic device
Lo, Chih-Cheng, 15 May 2009
DNA gel electrophoresis is a critical analytical step in a wide spectrum of genomic
analysis assays. Great efforts have been directed to the development of
miniaturized microfluidic systems (“lab-on-a-chip” systems) to perform low-cost,
high-throughput DNA gel electrophoresis. However, further progress toward
dramatic improvements of separation performance over ultra-short distances requires
a much more detailed understanding of the physics of DNA migration in
the sieving gel matrix than is currently available in the literature. The ultimate goal
would be the ability to quantitatively determine the achievable level of separation
performance by direct measurements of fundamental parameters (mobility, diffusion,
and dispersion coefficients) associated with the gel matrix instead of the
traditional trial-and-error process.
We successfully established this predictive capability by measuring these fundamental
parameters on a conventional slab gel DNA sequencer. However, it is difficult to carry out fast and extensive measurements of these parameters on a conventional
gel electrophoresis system using single-point detection (2,000 hours on
the slab gel DNA sequencer we used).
To address this issue, we designed and built a new automated whole-gel scanning
detection system for a systematic investigation of these governing parameters on
a microfluidic gel electrophoresis device with integrated on-chip electrodes, heaters,
and temperature sensors. With this system, we can observe the progress of
DNA separation along the whole microchannel with high temporal and spatial
accuracy in nearly real time. This is in contrast both to conventional slab gel imaging, where the entire gel can be monitored but only in one time frame after completion of the separation, and to capillary electrophoresis systems, which allow detection as a function of time but only at a single detection location.
With this system, a complete set of mobility, diffusion, and dispersion data can be
collected within one hour instead of days or even months of work on a conventional
sequencer under the same experimental conditions. The ability to acquire
both spatial and temporal data simultaneously provides a more detailed picture of
the separation process that can potentially be used to refine theoretical models
and improve separation performance over ultra-short distances for the next generation of electrophoresis technology.
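The kind of prediction the abstract describes can be illustrated with a short calculation. The sketch below computes the resolution between two DNA bands directly from the fundamental parameters; the mobility, diffusion, and field values are assumed for illustration, not taken from the thesis:

```python
import math

def resolution(mu1, mu2, D1, D2, E, t):
    """Peak resolution of two Gaussian bands after electrophoresis for
    time t [s] in field E [V/cm]; mu in cm^2/(V s), D in cm^2/s."""
    dx = abs(mu1 - mu2) * E * t            # band-center spacing [cm]
    s1 = math.sqrt(2.0 * D1 * t)           # Gaussian band sigma [cm]
    s2 = math.sqrt(2.0 * D2 * t)
    return dx / (2.0 * (s1 + s2))

# Assumed values for two short fragments in a polyacrylamide gel:
mu1, mu2 = 2.0e-4, 1.9e-4                  # mobilities [cm^2/(V s)]
D = 1.0e-7                                 # effective dispersion [cm^2/s]
E = 100.0                                  # field strength [V/cm]
for t in (10, 30, 60, 120):                # run times [s]
    x = mu1 * E * t                        # migration distance [cm]
    print(f"t={t:4d} s  x={x:5.2f} cm  R={resolution(mu1, mu2, D, D, E, t):.2f}")
```

Because the band spacing grows as t while the band width grows as the square root of t, resolution improves with the square root of run time, which is why measured mobility and dispersion coefficients translate directly into a predicted separation distance.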
462. Uncertainty of microwave radiative transfer computations in rain
Hong, Sung Wook, 2 June 2009
Currently, the effect of the vertical resolution on the brightness temperature (BT)
has not been examined in depth. The uncertainty of the freezing level (FL) retrieved
using two different satellites' data is large. Various radiative transfer (RT) codes yield
different BTs in strong scattering conditions.
The purposes of this research were: 1) to understand the uncertainty of the BT
contributed by the vertical resolution numerically and analytically; 2) to reduce the
uncertainty of the FL retrieval using new thermodynamic observations; and 3) to
investigate the characteristics of four different RT codes.
Firstly, a plane-parallel RT Model (RTM) of n layers in light rainfall was used for
the analytical and computational derivation of the vertical resolution effect on the BT.
Secondly, a new temperature profile based on observations was incorporated into the Texas A&M University (TAMU) algorithm. The Precipitation Radar (PR) and Tropical
Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) data were utilized for
the improved FL retrieval. Thirdly, the TAMU, Eddington approximation (EDD), Discrete Ordinate, and backward Monte Carlo codes were compared under various view
angles, rain rates, FLs, frequencies, and surface properties.
The uncertainty of the BT decreased as the number of layers increased. The uncertainty was due to the optical thickness rather than to the relative humidity, pressure distribution, water vapor, or temperature profile. The mean TMI FL showed good agreement with the mean bright band height. A new temperature profile reduced the uncertainty of the TMI FL by about 10%.
The differences in BT among the four RT codes were within 1 K at the current sensor view angle over the entire dynamic rain rate range at 10-37 GHz. The
differences between the TAMU and EDD solutions were less than 0.5 K for the specular
surface.
In conclusion, this research suggests that the vertical resolution should be considered as a parameter in the forward model. A new temperature profile improved the TMI FL in the tropics, but uncertainty remains for low FLs. Generally, the four RT codes agreed with each other, except at nadir, near the limb, or in heavy rainfall. The TAMU and EDD codes agreed with each other more closely than with the other RT codes.
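The vertical-resolution effect can be made concrete with a toy plane-parallel model. The sketch below is a non-scattering emission/absorption recursion with an assumed linear temperature profile, uniform extinction, unit surface emissivity, and no reflected downwelling (none of these choices come from the thesis); it shows the nadir BT converging as the number of layers grows:

```python
import math

def brightness_temp(n_layers, tau_total=1.0, t_surf=300.0, t_top=220.0):
    """Upwelling nadir BT for a non-scattering plane-parallel atmosphere
    split into n homogeneous layers."""
    d_tau = tau_total / n_layers
    tb = t_surf                                  # surface emission
    for i in range(n_layers):                    # integrate bottom -> top
        # mid-layer temperature on a linear lapse from t_surf to t_top
        t_layer = t_surf + (t_top - t_surf) * (i + 0.5) / n_layers
        tb = tb * math.exp(-d_tau) + t_layer * (1.0 - math.exp(-d_tau))
    return tb

for n in (1, 2, 5, 10, 50, 200):
    print(f"n={n:4d}  TB={brightness_temp(n):7.3f} K")
```

Even in this simple setting the single-layer value differs from the converged one by a few kelvin, which is the flavor of vertical-resolution uncertainty the first research question addresses.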
463. Discretization and Approximation Methods for Reinforcement Learning of Highly Reconfigurable Systems
Lampton, Amanda K., December 2009
There are a number of techniques that are used to solve reinforcement learning
problems, but very few that have been developed for and tested on highly reconfigurable
systems cast as reinforcement learning problems. A reconfigurable system is a vehicle (air, ground, or water) or a collection of vehicles that can change its geometrical features, i.e., shape or formation, to perform tasks that the vehicle could not otherwise accomplish. These systems tend to be optimized for several operating
conditions, and then controllers are designed to reconfigure the system from one operating
condition to another. Q-learning, an unsupervised episodic learning technique
that solves the reinforcement learning problem, is an attractive control methodology
for reconfigurable systems. It has been successfully applied to a myriad of control
problems, and there are a number of variations that were developed to avoid or alleviate some limitations of earlier versions of this approach. This dissertation describes the
development of three modular enhancements to the Q-learning algorithm that solve
some of the unique problems that arise when working with this class of systems, such
as the complex interaction of reconfigurable parameters and computationally intensive
models of the systems. A multi-resolution state-space discretization method is developed
that adaptively rediscretizes the state-space by progressively finer grids around
one or more distinct Regions Of Interest within the state or learning space. A genetic
algorithm that autonomously selects the basis functions to be used in the approximation of the action-value function is applied periodically throughout the learning
process. Policy comparison is added to monitor the state of the policy encoded in the
action-value function to prevent unnecessary episodes at each level of discretization.
This approach is validated on several problems including an inverted pendulum, reconfigurable
airfoil, and reconfigurable wing. Results show that the multi-resolution
state-space discretization method reduces the number of state-action pairs, often by
an order of magnitude, required to achieve a specific goal and the policy comparison
prevents unnecessary episodes once the policy has converged to a usable policy. Results
also show that the genetic algorithm is a promising candidate for the selection
of basis functions for function approximation of the action-value function.
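To make the two key ingredients concrete, here is a minimal sketch of a tabular Q-learning backup over a state space discretized at two resolutions, with a finer cell size inside a Region Of Interest. The grid bounds, cell sizes, learning rate, and action set are invented for illustration and are not the dissertation's actual formulation:

```python
from collections import defaultdict

class MultiResGrid:
    """1-D stand-in for the multi-resolution discretization: a coarse
    global cell size, refined inside a Region Of Interest (ROI)."""
    def __init__(self, coarse=0.5, fine=0.1, roi=(-0.2, 0.2)):
        self.coarse, self.fine, self.roi = coarse, fine, roi

    def key(self, x):
        h = self.fine if self.roi[0] <= x <= self.roi[1] else self.coarse
        return (round(x / h), h)          # cell index plus its size

grid = MultiResGrid()
Q = defaultdict(float)                    # Q[(state_key, action)] -> value
ALPHA, GAMMA = 0.1, 0.95                  # assumed learning parameters
ACTIONS = (-1, +1)

def q_update(x, a, reward, x_next):
    """One-step Q-learning backup on the discretized state."""
    s, s_next = grid.key(x), grid.key(x_next)
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

q_update(x=0.05, a=+1, reward=1.0, x_next=0.12)   # lands in the fine ROI
```

Refining only inside the ROI is what keeps the table small: states far from the region that matters share coarse cells, so far fewer state-action pairs need to be visited, consistent with the order-of-magnitude reduction the results report.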
464. Design and Construction of a Low Temperature Scanning Tunneling Microscope
Chen, Chi, August 2010
A low temperature scanning tunneling microscope (LTSTM) was built for use in an ultra-high vacuum (UHV) system. The scanning tunneling microscope (STM) was tested on an existing 3He cryostat and calibrated at room, liquid-nitrogen, and liquid-helium temperatures. We analyzed the operational electronic and vibration noises and made some effective improvements. To demonstrate the capabilities of the STM, we obtained atomically resolved images of the Au (111) and graphite surfaces. In addition, we showed that stable tunneling junctions can be formed between the Pt/Ir tip and a superconducting PbBi thin film.
We observed the atomic corrugation on Au (111) and measured the height of the atomic steps to be approximately 2.53 Å, which agrees with published values. In our images of the graphite surface, we found both the triangular structure of the β atoms and the complete α-β hexagonal unit cell, using the same tip and the same bias voltage of 0.2 V. The successful observation of the hidden α atoms of graphite is encouraging with regard to the possibility of imaging other materials with atomic resolution using our STM.
We also demonstrated that stable tunneling junctions can be formed at various temperatures. To demonstrate this, the superconducting current-voltage and differential conductance-voltage characteristics of a PbBi film were measured from 1.1 K to 9 K. From these data, the temperature-dependent energy gap of the superconductor was shown to be consistent with the predictions of the Bardeen, Cooper, and Schrieffer (BCS) theory.
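The BCS comparison in the last paragraph amounts to checking the measured gap Δ(T) against theory. A minimal sketch using the standard weak-coupling interpolation formula, with an assumed Tc of 8.5 K standing in for the actual PbBi film (the abstract does not give the value):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant [eV/K]

def bcs_gap(t, tc=8.5):
    """BCS gap via the common interpolation
    Delta(T) ~ Delta(0) * tanh(1.74 * sqrt(Tc/T - 1)),
    with the weak-coupling relation Delta(0) = 1.764 * kB * Tc.
    Tc = 8.5 K is an assumed value for a PbBi film."""
    if t >= tc:
        return 0.0
    delta0 = 1.764 * K_B * tc      # gap at T = 0 [eV]
    return delta0 * math.tanh(1.74 * math.sqrt(tc / t - 1.0))

for t in (1.1, 4.2, 7.0, 8.0, 8.4):
    print(f"T={t:4.1f} K  Delta={bcs_gap(t)*1e3:6.3f} meV")
```

Plotting such a curve against the gaps extracted from the differential conductance spectra is the consistency check the abstract describes.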
465. Increased signal intensity of the cochlea on pre- and post-contrast enhanced 3D-FLAIR in patients with vestibular schwannoma
Nakashima, Tsutomu; Fukatsu, Hiroshi; Nihashi, Takashi; Kawai, Hisashi; Naganawa, Shinji; Yamazaki, Masahiro, December 2010
Nagoya University doctoral dissertation. Degree: Doctor of Medicine (course-based). Date conferred: 28 September 2010. Submitted as the doctoral thesis of Masahiro Yamazaki.
466. Using Subpixel Technology in Contour Recognition on Low-resolution Hexagonal Images
Lee, Yorker, 8 June 2000
Pattern recognition is very important in industrial automation. A machine vision system must exchange information about the objects of interest very quickly, so it must have powerful recognition capability.
Image processing has become increasingly important, but most image processing research is developed for high-resolution images. To increase processing speed and reduce storage space, low-resolution images are so far the only way to meet both requirements, so we built our recognition system for low-resolution images.
From observing the characteristics of the hexagonal grid, we found that it has greater angular resolution and better image performance than the rectangular grid. We therefore apply the hexagonal grid to low-resolution images and use a Curve Bend Function (CBF) on the hexagonal grid system to improve recognition accuracy. We present a subpixel technique for low-resolution hexagonal images that obtains better results.
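The angular-resolution claim is easy to see numerically: a hexagonal pixel has six equidistant neighbours spaced 60 degrees apart, versus four at 90 degrees on a square grid. A small sketch using axial coordinates (a common hexagonal-grid convention assumed here, not described in the abstract):

```python
import math

# "Pointy-top" axial hexagonal grid: six neighbour offsets per cell.
HEX_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_to_cartesian(q, r, size=1.0):
    """Centre of axial cell (q, r) in Cartesian coordinates."""
    x = size * math.sqrt(3.0) * (q + r / 2.0)
    y = size * 1.5 * r
    return x, y

for dq, dr in HEX_NEIGHBOURS:
    nx, ny = hex_to_cartesian(dq, dr)
    angle = math.degrees(math.atan2(ny, nx)) % 360
    dist = math.hypot(nx, ny)
    print(f"neighbour ({dq:+d},{dr:+d}) at {angle:5.1f} deg, distance {dist:.3f}")
```

All six neighbours sit at the same distance, at angles 0, 60, 120, 180, 240, and 300 degrees, which is the finer angular sampling a contour-following method like the CBF can exploit.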
467. Development of a variable-temperature ion mobility/time-of-flight mass spectrometer for separation of electronic isomers
Verbeck, Guido Fridolin, 29 August 2005
A liquid nitrogen-cooled ion mobility spectrometer coupled with a time-of-flight mass spectrometer was constructed to demonstrate the ability to discriminate between electronic isomers. Ion mobility allows for the separation of ions based on differing cross-section-to-charge ratios, which makes it possible to discriminate between species of the same mass if the ions differ in cross-section. Time-of-flight mass spectrometry was added to mass-identify the separated peaks for proper identification.
A liquid nitrogen-cooled mobility cell was employed for a two-fold purpose.
First, the low temperatures increase the peak resolution to aid in resolving the separated
ions. This is necessary when isomers may have similar cross-sections. Second, low
temperature shortens the mean free path and decreases the neutral buffer gas speeds, allowing for more interactions between the ions and the drift gas. A Kr2+ study was performed to verify instrument performance.
The variable-temperature ion mobility spectrometer was utilized to separate the
distonic and conventional ion forms of CH3OH, CH3F, and CH3NH2 and to discriminate
between the keto and enol forms of the acetone radical cation. Density functional theory
and ab initio calculations were employed to aid in proper identification of separating
isomers. Monte Carlo integration tools were also developed to predict ion cross-section
and resolution within a buffer gas.
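The cross-section-to-mobility relation underlying the separation is conventionally written with the Mason-Schamp equation. A sketch with assumed ion, gas, and drift-cell parameters (the instrument's actual operating conditions are not given in the abstract):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant [J/K]
E_CHG = 1.602177e-19  # elementary charge [C]
AMU = 1.660539e-27    # atomic mass unit [kg]

def mason_schamp_mobility(z, ion_amu, gas_amu, temp_k, n_density, ccs_m2):
    """Low-field Mason-Schamp mobility K [m^2/(V s)] from the
    ion-neutral collision cross-section ccs_m2 [m^2]."""
    mu = (ion_amu * gas_amu) / (ion_amu + gas_amu) * AMU   # reduced mass
    return (3.0 * z * E_CHG / (16.0 * n_density)
            * math.sqrt(2.0 * math.pi / (mu * K_B * temp_k)) / ccs_m2)

P_PA = 400.0                           # assumed drift-gas pressure [Pa]
for t in (77.0, 150.0, 300.0):         # LN2-cooled up to room temperature
    n = P_PA / (K_B * t)               # ideal-gas number density [1/m^3]
    k = mason_schamp_mobility(1, 100.0, 4.0, t, n, 1.2e-18)
    print(f"T={t:5.1f} K  K={k*1e4:8.1f} cm^2/(V s)")
```

Two isomers of equal mass but different cross-sections thus acquire different mobilities and drift times, and because diffusional peak broadening scales with temperature, cooling the cell sharpens the peaks, as the abstract describes.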
468. Image resolution analysis: a new, robust approach to seismic survey design
Tzimeas, Constantinos, 29 August 2005
Seismic survey design methods often rely on qualitative measures to provide an optimal image of their objective target. Fold counts, ray tracing techniques that count ray hits on binned interfaces, and even advanced 3-D survey design methods that try to optimize offset and azimuth coverage are prone to fail in their imaging predictions, especially in complex geological or structural settings. These commonly used approaches can fail because they do not take into account the ray geometry at the target points. Inverse theory results can provide quantitative and objective constraints on acquisition design. Beylkin's contribution to this field is an elegant and simple equation describing a reconstructed point scatterer given the source/receiver distribution used in the imaging experiment. Quantitative measures of spatial image resolution were developed to assess the efficacy of competing acquisition geometries. Apart from the source/receiver configuration, parameters such as the structure and seismic velocity also influence image resolution. Understanding their effect on image quality allows us to better interpret the resolution results for the surveys under examination.
A salt model was used to simulate imaging of target points located underneath and near the flanks of the diapir. Three different survey designs were examined. Results from these simulations show that, contrary to simple models, near-offsets do not always produce better resolved images than far-offsets. However, consideration of decreasing signal-to-noise ratio revealed that images obtained from the far-offset experiment degrade faster than the near-offset ones.
The image analysis was performed on VSP field data as well as synthetics generated by finite difference forward modeling. The predicted image resolution results were compared to measured resolution from the migrated sections of both the field data and the synthetics. This comparison confirms that image resolution analysis provides as good a resolution prediction as the prestack Kirchhoff depth migrated section of the synthetic gathers. Even in the case of the migrated field data, despite the presence of error-introducing factors (different signal-to-noise ratios, shape and frequency content of source wavelets, etc.), image resolution analysis performed well, exhibiting the same trends of resolution changes at different test points.
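The flavor of this Beylkin-style analysis can be shown in a few lines: each source/receiver pair illuminates the image point with a local wavenumber proportional to the sum of the unit vectors toward source and receiver, and resolution scales inversely with the span of wavenumbers covered. The 2-D geometry, single frequency, and constant velocity below are illustrative assumptions, not the thesis setup:

```python
import math

def unit(dx, dz):
    """Unit vector for the 2-D displacement (dx, dz)."""
    d = math.hypot(dx, dz)
    return dx / d, dz / d

def resolution_estimate(sources, receivers, target, freq_hz, vel):
    """Collect local wavenumbers K = (2*pi*f/v) * (u_s + u_r) at the image
    point, one per source/receiver pair; resolution ~ 2*pi / span of K."""
    k0 = 2.0 * math.pi * freq_hz / vel
    tx, tz = target                      # target at depth tz, surface z = 0
    kxs, kzs = [], []
    for sx in sources:
        for rx in receivers:
            us = unit(sx - tx, -tz)      # target -> source direction
            ur = unit(rx - tx, -tz)      # target -> receiver direction
            kxs.append(k0 * (us[0] + ur[0]))
            kzs.append(k0 * (us[1] + ur[1]))
    dx_res = 2.0 * math.pi / (max(kxs) - min(kxs) + 1e-12)
    dz_res = 2.0 * math.pi / (max(kzs) - min(kzs) + 1e-12)
    return dx_res, dz_res

line = [i * 100.0 for i in range(11)]    # surface spread, 0..1000 m
for depth in (500.0, 2000.0):
    dx, dz = resolution_estimate(line, line, (500.0, depth), 30.0, 2000.0)
    print(f"target at {depth:6.0f} m depth: dx ~ {dx:7.1f} m, dz ~ {dz:7.1f} m")
```

Run at two depths, the same spread gives much poorer lateral resolution for the deeper target because the illumination angles narrow, which is exactly the geometry effect that fold counts and ray-hit tallies miss.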
469. A high resolution geophysical investigation of spatial sedimentary processes in a paraglacial turbid outwash fjord: Simpson Bay, Prince William Sound, Alaska
Noll, Christian John, IV, 12 April 2006
Simpson Bay is a turbid, outwash fjord located in northeastern Prince William Sound, Alaska. A
high ratio of watershed:basin surface area combined with high precipitation and an easily erodible
catchment create high sediment inputs. Fresh water from heavy precipitation and meltwater from high
alpine glaciers enter Simpson Bay through bay head rivers and small shoreline creeks that drain the
catchment. Side scan sonar, seismic profiling, and high resolution bathymetry were used to investigate the
record of modern sedimentary processes. Four bottom types and two seismic facies were described to
delineate the distribution of sediment types and sedimentary processes in Simpson Bay. Sonar images
showed areas of high backscatter (coarse-grained sediment, bedrock outcrops, and shorelines) in shallow
areas and areas of low backscatter (estuarine mud) in deeper areas. Seismic profiles showed that high
backscatter areas reflected emergent glacial surfaces while low backscatter areas indicated modern
estuarine mud deposition. The data show terminal morainal bank systems and grounding line deposits at
the mouth of the bay, as well as rocky promontories (relict medial moraines) that extend as terrestrial features through the subtidal zone and into deeper waters. Tidal currents and mass wasting are the major influences on
sediment distribution. Hydrographic data showed high spatial variability in surface and bottom currents
throughout the bay. Bottom currents are tide dominated, and are generally weak (5-20 cm s-1) in the open
water portions of the bay while faster currents are found associated with shorelines, outcrops, and
restrictive sills. Tidal currents alone are not enough to explain the lack of estuarine mud deposition in shallow areas. Bathymetric data showed steep slopes throughout the bay, suggesting sediment gravity flows. Central Alaska is a seismically active area, and earthquakes are most likely the triggering
mechanism of the gravity flows.
470. Causal analysis and resolution for software development problems
Liang, Ting-wei, 4 July 2009
In recent years, obtaining CMMI certification has required a great deal of time and effort, so organizations are looking for tools and methods to speed up the certification process. Causal Analysis and Resolution (CAR), a CMMI level-five process area, is an important issue for all industries. In software development, we first have to identify the causes of defects, and then use a systematic approach to sum up the necessary causes of software defects in order to help managers make better decisions and develop action items. Without a doubt, this is a very important issue in the software development process.
This study explores the use of causal analysis and resolution methods to address software defects. Through the implementation of CAR, we can determine the root causes of defects and avoid introducing defects into products. The study focuses on the implementation of CAR and proposes methods, procedures, and management forms. Moreover, it introduces Mill's methods of causal reasoning into the structure of CAR, giving managers a better way to sum up the causes of defects.
The study uses the case study method. First, it works with a company to collect cause-and-effect diagram data and combines Mill's methods for inductive causal analysis. It then arranges interviews with the company's managers to identify the necessary causes of defects. Finally, it helps the company develop action items in order to achieve causal analysis and resolution in its software development process.
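As a concrete illustration of how Mill's methods can be mechanized over defect records, here is a toy sketch; the factor names and cases are invented for illustration, not drawn from the case company:

```python
from functools import reduce

# Each record lists the candidate causal factors present in one release.
defect_cases = [                      # releases that produced defects
    {"rushed_schedule", "no_code_review", "new_hire"},
    {"rushed_schedule", "no_code_review", "legacy_module"},
    {"rushed_schedule", "no_code_review", "weekend_deploy"},
]
clean_cases = [                       # releases that did not
    {"rushed_schedule", "code_review", "legacy_module"},
    {"rushed_schedule", "unit_tests", "new_hire"},
]

# Method of agreement: factors common to every defective case.
agreement = reduce(set.intersection, defect_cases)

# Method of difference: of those, keep factors absent from all clean cases.
difference = agreement - set().union(*clean_cases)

print("method of agreement :", sorted(agreement))   # candidate causes
print("method of difference:", sorted(difference))  # necessary-cause candidates
```

In the CAR flow, the factors that survive both methods would feed the cause-and-effect diagram review with managers before action items are defined.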