About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Liquid helium acoustic microscope

Steer, A. P. January 1987 (has links)
No description available.
2

Energy transfer in A²Σ⁺ OH

Lengel, Russell Kay, January 1976 (has links)
Thesis--Wisconsin. / Vita. Includes bibliographical references.
3

Segmentation and classification of cell nuclei in tissue sections

Mouroutis, Theodoros January 2000 (has links)
No description available.
4

High-Speed, Large Depth-of-Field and Automated Microscopic 3D Imaging

Liming Chen (18419367) 22 April 2024 (has links)
<p dir="ltr">Over the last few decades, three-dimensional (3D) optical imaging and sensing techniques have attracted much attention from both academia and industries. Owing to its capability of gathering more information than conventional 2D imaging, it has been successfully adopted in many applications on the macro scale which ranges from sub-meters to meters such as entertainment, commercial electronics, manufacturing, and construction. For example, the iPhone “FaceID” sensor is used for facial recognition, and the Microsoft Kinect is used to track body motion in video games. With recent advances in many technical fields, such as semiconductor packaging, additive manufacturing, and micro-robots, there is an increasing need for microscopic 3D imaging, and several techniques including interferometry, confocal microscopy, focus variation, and structured light have been developed and adopted in these industries. Among these techniques, the structured light 3D imaging technique is considered one of the most promising techniques for in-situ metrology, owing to its advantage of simple configuration and high measurement speed. However, several challenges must be addressed in employing the structured-light 3D imaging technique in these fields.</p><p dir="ltr">The first challenge is the limited measurement range caused by the limited depth of field (DOF). Given the necessity for large magnification in the microscopic structured light system, the DOF becomes notably shallow, especially when pin-hole lenses are adopted. This issue is exacerbated by the fact that the measured objects in the aforementioned industries could contain miniaturized features spanning a broad height range. To address this problem, we introduce the idea of the focus stacking technique, wherein the focused pixels gathered from various focus settings are merged to form an all-in-focus image, into the structured-light 3D imaging. We further developed a computational framework that utilizes the phase information and fringe contrast of the projected fringe patterns to mitigate the influence of object textures.</p><p dir="ltr">The second challenge is the 3D imaging speed. The 3D measurement speed is a crucial factor for in-situ applications. We improved the large DOF 3D imaging speed by reducing the required fringe images from two aspects: 1) We developed a calibration method for multifocus pin-hole mode, which can eliminate the necessity of the 2D image alignment. The conventional method based on circle patterns will be affected during the feature extraction process by the significant camera defocusing. In contrast, our proposed method is more robust since it uses virtual features extracted from a reconstructed white flat surface under a pre-calibrated focus setting. 2)We developed a phase unwrapping method with the assistance of the electrically tunable lens (ETL), which is an optical component we used to capture fringe images under various focus settings. The proposed phase unwrapping method leverages the focal plane position of each focus setting to estimate a rough depth map for the geometric-constraint phase unwrapping algorithm. By doing this, the method eliminates the limitation on the effective working depth range and becomes feasible in large DOF 3D imaging.</p><h4>Even with all previous methodologies, the efficiency of large DOF 3D imaging is still not high enough under certain circumstances. 
One of the major reasons is that we can still only use a series of pre-defined focus settings to run the focus stacking, since we have no prior on the measured objects. This issue could lead to low measurement efficiency when the depth range of the measured objects does not cover the whole enlarged DOF. To improve the performance of the system under such situations, we developed a method that introduces another computational imaging technique: the focal sweep technique, to help determine the optimal focus settings adapting to different measured objects.</h4><h4>In summary, this dissertation contributed to high-speed, large depth-of-field, and automated 3D imaging, which can be used in micro-scale applications from the following aspects: (1) enlarging the DOF of the microscopic 3D imaging using the focus stacking technique; (2) developing methods to improve the speed of large DOF microscopic 3D imaging; and (3) developing a method to improve the efficiency of the focus stacking under certain circumstances. These contributions can potentially enable the structured-light 3D imaging technique to be an alternative 3D microscopy approach for many academic studies and industry applications.</h4><p></p>
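As an illustration of the focus-stacking step described in this abstract (merging the best-focused pixels across focus settings using fringe contrast), a minimal sketch follows. It is not taken from the dissertation; it assumes N-step phase-shifted fringe images per focus setting and uses the standard per-pixel fringe-modulation measure to pick the sharpest focus setting.

```python
import numpy as np

def fringe_modulation(images):
    """Per-pixel fringe modulation (contrast) from N phase-shifted fringe images.

    images: array of shape (N, H, W), the N equally phase-shifted captures.
    Higher modulation indicates better-focused fringes at that pixel.
    """
    n = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), images, axes=1)   # sum_k I_k * sin(delta_k)
    c = np.tensordot(np.cos(deltas), images, axes=1)   # sum_k I_k * cos(delta_k)
    return 2.0 / n * np.sqrt(s**2 + c**2)

def focus_stack(stacks):
    """Merge phase maps from several focus settings into an all-in-focus result.

    stacks: list of (phase_shifted_images, wrapped_phase) tuples, one per focus
            setting; phase_shifted_images has shape (N, H, W), wrapped_phase (H, W).
    Returns (fused_phase, best_focus_index), selected per pixel by max modulation.
    """
    modulations = np.stack([fringe_modulation(imgs) for imgs, _ in stacks])
    phases = np.stack([phi for _, phi in stacks])
    best = np.argmax(modulations, axis=0)                      # (H, W) focus index
    fused = np.take_along_axis(phases, best[None], axis=0)[0]  # per-pixel phase pick
    return fused, best
```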
5

Critical point behaviour in binary and ternary liquid mixtures with particular reference to rheological and interfacial properties in model mixtures for microemulsions

Clements, Patricia J. January 1997 (has links)
The phase behaviour, rheological effects and interfacial properties of binary and ternary liquid mixtures have been studied near critical points. In particular, measurements have been made of the viscosity (at the bulk macroscopic level by capillary viscometry and at the microscopic level by fluorescence depolarisation) and of critical-point wetting and adsorption (at the solid-liquid interface using evanescent-wave-generated fluorescence spectroscopy and at the liquid-vapour interface using specular neutron reflection). The systems investigated have been mostly alkane + perfluoroalkane mixtures or 2-butoxyethanol + H2O or D2O mixtures, although in some cases hexamethyldisiloxane, propanenitrile and perfluorooctyloctane have also been components of the mixtures. The main outcomes of this study are:
• Macroscopic viscosity: The divergence to infinity in the shear viscosity of hexane + perfluorohexane at the critical endpoint, for approach along the path of constant critical composition both from the single phase and along both limbs of the coexistence curve, is described well using the Renormalisation Group Theory critical exponent y = 0.04. The correlation length amplitude obtained by fitting the shear-gradient dependence of the viscosity is ξ₀ = (5.5 ± 1.5) Å.
• Microscopic viscosity: The product of the rotational correlation time and the temperature, τR·T, often taken as a measure of the microscopic viscosity, exhibits an anomaly as the critical point is approached as a function of temperature. This anomaly mirrors that in the macroscopic viscosity for some fluorescent dye probes, but for others the anomaly is in the opposite sense, indicating that other effects such as solvent structure must play a part in the near-critical behaviour of τR·T.
• Critical-point wetting at the solid-liquid interface: The wetting transition temperature has been identified for heptane + perfluorohexane at the quartz-liquid interface from fluorescence lifetime measurements of a probe. The wetting layer has the same composition as the bulk heptane-rich phase and the transition is tentatively identified as first-order.
• Adsorption and wetting at the liquid-vapour interface: The surface structure of several mixtures has been determined by neutron reflection. The results are in general agreement with the expectations of critical-point wetting and adsorption. The surface is complex and, in some mixtures, an oscillatory scattering length density profile through the interface is required to model the reflectivity data.
• Ternary mixtures: The phase behaviour of three mixtures exhibiting tunnel phase behaviour has been studied experimentally and various characteristics of the shape of the tunnel identified. A theoretical study on one of the mixtures predicts the drop in temperature for the locus of maximum phase-separation temperatures which is observed experimentally.
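The weak power-law divergence of the shear viscosity quoted in the first bullet can be written out explicitly. The following is a standard parameterisation consistent with the abstract's exponent y = 0.04 and correlation length amplitude ξ₀, not an equation quoted from the thesis:

```latex
% Reduced temperature t = (T - T_c)/T_c; the correlation length diverges as
% xi = xi_0 |t|^{-nu}, and the shear viscosity diverges weakly with exponent y.
\xi(t)  = \xi_0 \, |t|^{-\nu}, \qquad
\eta(t) \simeq \eta_0 \, |t|^{-y}
        = \eta_0 \left(\frac{\xi}{\xi_0}\right)^{y/\nu},
\qquad y \approx 0.04 .
```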
6

Facilitation of visual pattern recognition by extraction of relevant features from microscopic traffic data

Fields, Matthew James 15 May 2009 (has links)
An experimental approach to traffic flow analysis is presented in which methodology from pattern recognition is applied to a specific dataset to examine its utility in determining traffic patterns. The selected dataset for this work, taken from a 1985 study by JHK and Associates (traffic research) for the Federal Highway Administration, covers an hour-long time period over a quarter-mile section and includes nine different identifying features for traffic at any given time. The initial step is to select the most pertinent of these features as a target for extraction and local storage during the experiment. The tools created for this approach, a two-level hierarchical group of operators, are used to extract features from the dataset to create a feature space; this is done to reduce the experimental set to a matrix of desirable attributes from the vehicles on the roadway. The application is to identify whether these data can be readily parsed into four distinct traffic states; in this case, the state of a vehicle is defined by its velocity and acceleration at a selected timestamp. A three-dimensional plot is used, with color as the third dimension and seen from a top-down perspective, to initially identify vehicle states in a section of roadway over a selected section of time. This is followed by applying k-means clustering, in this case with k=4 to match the four distinct traffic states, to the feature space to examine its viability in determining the states of vehicles in a time section. The method's accuracy is viewed through silhouette plots. Finally, a group of experiments run through a decision-tree architecture is compared to the k-means clustering approach. Each decision-tree format uses sets of predefined values for velocity and acceleration to parse the data into the four states; modifications are made to acceleration and deceleration values to examine different results. The three-dimensional plots provide a visual example of congested traffic for use in performing visual comparisons of the clustering results. The silhouette plot results of the k-means experiments show inaccuracy for certain clusters; on the other hand, the decision-tree work shows promise for future work.
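A minimal sketch of the k=4 clustering and silhouette evaluation described in this abstract, assuming the feature space has already been reduced to per-vehicle velocity and acceleration samples; it uses scikit-learn rather than whatever tooling the thesis actually employed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

def cluster_traffic_states(velocity, acceleration, k=4, random_state=0):
    """Cluster per-vehicle (velocity, acceleration) samples into k traffic states
    and report silhouette quality, mirroring the k=4 experiment in the abstract.

    velocity, acceleration: 1-D arrays of equal length (one sample per vehicle
    per timestamp from the extracted feature space).
    """
    X = np.column_stack([velocity, acceleration])
    # Standardise so velocity (large range) does not dominate acceleration.
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    km = KMeans(n_clusters=k, n_init=10, random_state=random_state)
    labels = km.fit_predict(X)

    overall = silhouette_score(X, labels)          # mean silhouette width
    per_sample = silhouette_samples(X, labels)     # values for a silhouette plot
    return labels, overall, per_sample
```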
7

Simultaneous calibration of a microscopic traffic simulation model and OD matrix

Kim, Seung-Jun 30 October 2006 (has links)
With the recent widespread deployment of intelligent transportation systems (ITS) in North America, there is an abundance of data on traffic systems and thus an opportunity to use these data in the calibration of microscopic traffic simulation models. Even though ITS data have been utilized to some extent in the calibration of microscopic traffic simulation models, efforts have focused on improving the quality of the calibration based on the aggregate form of ITS data rather than on disaggregate data. In addition, researchers have focused on identifying the parameters associated with car-following and lane-changing behavior models and their impacts on overall calibration performance. Therefore, the estimation of the Origin-Destination (OD) matrix has been considered a preliminary step rather than a stage that can be included in the calibration process. This research develops a methodology to calibrate the OD matrix jointly with model behavior parameters using a bi-level calibration framework. The upper level seeks to identify the best model parameters using a genetic algorithm (GA). In this level, a statistically based calibration objective function is introduced to account for the disaggregate form of ITS data in the calibration of microscopic traffic simulation models and, thus, accurately replicate the dynamics of observed traffic conditions. Specifically, the Kolmogorov-Smirnov test is used to measure the "consistency" between the observed and simulated travel time distributions. The calibration of the OD matrix is performed in the lower level, where observed and simulated travel times are incorporated into the OD estimator for the calibration of the OD matrix. The interdependent relationship between travel time information and the OD matrix is formulated using an Extended Kalman Filter (EKF) algorithm, which is selected to quantify the nonlinear dependence of the simulation results (travel time) on the OD matrix. The two test sites are an urban arterial and a freeway in Houston, Texas. The VISSIM model was used to evaluate the proposed methodologies. It was found that the accuracy of the calibration can be improved by using disaggregated data and by considering both driver behavior parameters and demand.
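The upper-level objective described in this abstract (Kolmogorov-Smirnov consistency between observed and simulated travel time distributions) can be sketched as follows. This is an illustrative outline only; `run_simulation` is a hypothetical wrapper around the simulator (e.g. a VISSIM run), not code from the thesis.

```python
import numpy as np
from scipy.stats import ks_2samp

def travel_time_consistency(observed_tt, simulated_tt):
    """Kolmogorov-Smirnov 'consistency' between observed and simulated travel
    time distributions.

    Returns the KS statistic (0 means identical empirical distributions); a GA
    can minimise this value over the behaviour-parameter search space.
    """
    stat, p_value = ks_2samp(np.asarray(observed_tt), np.asarray(simulated_tt))
    return stat

def calibration_objective(params, run_simulation, observed_tt):
    """Objective evaluated by the genetic algorithm for one candidate parameter
    set: run the microscopic simulation, then score the travel time match.

    run_simulation is a placeholder for the user's simulator wrapper that
    returns simulated travel times for the calibrated links.
    """
    simulated_tt = run_simulation(params)
    return travel_time_consistency(observed_tt, simulated_tt)
```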
8

Facilitation of visual pattern recognition by extraction of relevant features from microscopic traffic data

Fields, Matthew James 10 October 2008 (has links)
An experimental approach to traffic flow analysis is presented in which methodology from pattern recognition is applied to a specific dataset to examine its utility in determining traffic patterns. The selected dataset for this work, taken from a 1985 study by JHK and Associates (traffic research) for the Federal Highway Administration, covers an hour-long time period over a quarter-mile section and includes nine different identifying features for traffic at any given time. The initial step is to select the most pertinent of these features as a target for extraction and local storage during the experiment. The tools created for this approach, a two-level hierarchical group of operators, are used to extract features from the dataset to create a feature space; this is done to reduce the experimental set to a matrix of desirable attributes from the vehicles on the roadway. The application is to identify whether these data can be readily parsed into four distinct traffic states; in this case, the state of a vehicle is defined by its velocity and acceleration at a selected timestamp. A three-dimensional plot is used, with color as the third dimension and seen from a top-down perspective, to initially identify vehicle states in a section of roadway over a selected section of time. This is followed by applying k-means clustering, in this case with k=4 to match the four distinct traffic states, to the feature space to examine its viability in determining the states of vehicles in a time section. The method's accuracy is viewed through silhouette plots. Finally, a group of experiments run through a decision-tree architecture is compared to the k-means clustering approach. Each decision-tree format uses sets of predefined values for velocity and acceleration to parse the data into the four states; modifications are made to acceleration and deceleration values to examine different results. The three-dimensional plots provide a visual example of congested traffic for use in performing visual comparisons of the clustering results. The silhouette plot results of the k-means experiments show inaccuracy for certain clusters; on the other hand, the decision-tree work shows promise for future work.
9

Investigation of the Implementation of Ramp Reversal at a Diamond Interchange

Wang, Bo 16 December 2013 (has links)
Diamond interchange design has been commonly utilized in the United States to facilitate traffic exchange between freeways and frontage roads. Another, less common, interchange design is the X-ramp interchange, which is the reversed version of the diamond. The major benefit of the X-ramp interchange is that it keeps travelers on the freeway until the downstream exit ramp, so they avoid going through the intersection. It also has drawbacks: for example, travelers with cross-street destinations will experience more delay. This study focuses on when ramp reversal is desirable. To compare the diamond and X-ramp designs, an experimental design is constructed using the Latin Hypercube Design method. The four varying factors are interchange design type, traffic volume on the frontage road, through-movement percentage, and saturation rate of the intersection. Forty scenarios are generated for simulation study using Synchro and VISSIM. Based on the simulation study, optimal signal timing strategies are recommended for each type of interchange design under various traffic conditions. Ramp reversal is also found to be closely related to factors such as interchange frequency, upstream interchange design, traffic volume on the frontage road, through-movement percentage, and intersection saturation rate. Conclusions are drawn on when the X-ramp is better than the diamond interchange design. Finally, future research directions are recommended.
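A minimal sketch of the Latin Hypercube scenario generation described in this abstract, using SciPy's qmc module; the factor ranges are illustrative placeholders, not values from the thesis.

```python
import numpy as np
from scipy.stats import qmc

def generate_scenarios(n=40, seed=0):
    """Latin Hypercube sample over the four experimental factors named in the
    abstract. Ranges below are assumed placeholders, not the thesis values.

    Factors: interchange type (0 = diamond, 1 = X-ramp), frontage-road volume
    (veh/h), through-movement percentage, and intersection saturation rate.
    """
    sampler = qmc.LatinHypercube(d=4, seed=seed)
    unit = sampler.random(n)                          # n x 4 points in [0, 1)

    lower = [0.0, 200.0, 10.0, 0.5]                   # assumed lower bounds
    upper = [1.0, 1200.0, 90.0, 1.0]                  # assumed upper bounds
    scenarios = qmc.scale(unit, lower, upper)

    # First factor is categorical: round to diamond (0) or X-ramp (1).
    scenarios[:, 0] = np.round(scenarios[:, 0])
    return scenarios
```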
10

Technical investigation of the materials and methods utilized in a copy of a 17th century Dutch genre painting, Gerrit Dou's "Man interrupted at his writing" (1635)

Norbutus, Amanda J. January 2008 (has links)
Thesis (M.S.)--Villanova University, 2008. / Chemistry Dept. Includes bibliographical references.
