131

Validation of Volumetric Contact Dynamics Models

Boos, Michael January 2011 (has links)
A volumetric contact dynamics model has been proposed by Gonthier et al. [1, 2, 3] for the purpose of rapidly generating reliable simulations of space-based manipulator contact dynamics. By assuming that materials behave as a Winkler elastic foundation, forces and moments between two bodies in contact can be expressed in terms of the volume of interference between the undeformed geometries of the bodies. Friction between bodies is modelled by a dwell-time-dependent bristle model for both tangential friction and spinning friction torque. This volumetric model has a number of advantages. Unlike point-contact models, it allows for the modelling of contact between complex geometries and scenarios where the contact surface is relatively large, while being less computationally expensive than finite element methods. Rolling resistance is included in the model through damping effects across the volume of interference. The friction model accounts for dwell-time-dependent slip-stick effects, spinning friction torque, and the Contensou effect. In this thesis, an experimental validation of the volumetric contact model is presented for the first time. Models for simple geometries in contact (e.g., cylinder-on-plane, sphere-on-plane) have been developed for stationary contact and for contact with motion normal and tangential to the contact surface. Tangential motion is modelled with pure translation, pure rotation about the normal axis, and combined motion, in order to separately consider friction forces, spinning friction torque, and the Contensou effect. An apparatus has been developed to experimentally validate these models for metal-on-metal contact. The apparatus has two configurations, one for validating the normal contact models and the other for the friction models. Experimental measurements of forces and displacements are used to identify model parameters (e.g., volumetric stiffness and friction coefficients). For normal-force experiments, modelling the contact forces as proportional to the volume of interference was found to be a reasonable approximation. A Hertzian model was compared with the volumetric model for spherical payloads loaded quasi-statically. Using stiffnesses estimated from the spherical experiments, small misalignments of the cylindrical payloads were estimated, and these corresponded well with measured results. Dynamic experiments suggest an inverse relationship between impact velocity and the hysteretic damping coefficient. The high normal forces applied in the friction experiments were found to create significant wear on the contact surfaces. Coefficients of friction between titanium and aluminum were found to be consistent translationally and rotationally. Friction forces from combined translation and rotation demonstrate that the Contensou effect is accurately described by the volumetric contact model.
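
As a rough illustration of the normal-force relationship described in this abstract, the sketch below assumes the volumetric force law F = k_v · V · (1 + a · v_n) with a spherical-cap volume of interference for a sphere-on-plane contact. The stiffness and damping values are illustrative placeholders, not the experimentally identified parameters.

```python
import numpy as np

def sphere_plane_normal_force(d, d_dot, R, k_v=1e12, a=0.0):
    """Normal force for sphere-on-plane contact under the volumetric
    (Winkler foundation) model: F = k_v * V * (1 + a * d_dot), where V
    is the volume of interference -- here a spherical cap of height d.
    k_v (volumetric stiffness) and a (hysteretic damping) are the
    experimentally identified parameters; the defaults are illustrative.
    """
    if d <= 0.0:
        return 0.0
    V = np.pi * d**2 * (3.0 * R - d) / 3.0   # spherical-cap volume
    return k_v * V * (1.0 + a * d_dot)

# Quasi-static example: a 5 cm sphere pressed 0.1 mm into a plane
print(sphere_plane_normal_force(d=1e-4, d_dot=0.0, R=0.05))
```
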
132

Facilitating Brownfield Redevelopment Projects: Evaluation, Negotiation, and Policy

Wang, Qian January 2011 (has links)
A risky project evaluation technique called fuzzy real options analysis is developed to evaluate brownfield redevelopment projects. Other decision-making techniques, such as multiple criteria analysis and conflict analysis, can be incorporated into fuzzy real options analysis to facilitate negotiations on brownfield redevelopment among decision makers (DMs). The value of managerial flexibility, which is important in negotiations and policy making for brownfield redevelopment, is overlooked when the traditional evaluation method, net present value (NPV), is employed. Findings of this thesis can be used to promote brownfield redevelopment, thereby helping to eliminate environmental threats and enhance regional sustainability. A brownfield is an abandoned or underutilized property that contains, or may contain, pollutants, hazardous substances, or contaminants from previous usage, typically industrial activity. Brownfields often emerge when the local economy transitions from industrial to service-oriented sectors in search of greater profit. Governments actively promote brownfield redevelopment to eliminate public health threats, assist the economic transition, and enhance sustainability. However, developers are reluctant to participate in brownfield redevelopment because they often regard these projects as unprofitable when using classic evaluation techniques. On the other hand, case studies show that brownfield redevelopment projects can be good business opportunities for developers. An improved evaluation method is therefore developed to estimate the value of a brownfield more accurately. The main source of the difference between estimated and "actual" values lies in the failure of deterministic project evaluation tools to price uncertainty, which motivates efforts to enhance decision making under uncertainty. Real options modelling, which extends the ability of option pricing models to real asset evaluation, is employed in risky project evaluation because of its capacity to handle uncertainties. However, brownfield redevelopment projects contain uncertain factors that have no market price, violating the assumption of option pricing models that all risks are reflected in the market. This problem, called private risk, is addressed in this thesis by incorporating fuzzy numbers into real options, yielding what can be called fuzzy real options. Fuzzy real options are shown to generalize the original model to handle additional kinds of uncertainty, making them more suitable for project evaluation. A numerical technique based on hybrid variables is developed to price fuzzy real options: an extension of least squares Monte Carlo simulation (LSM) that produces numerical valuations of options. A major advantage of this methodology lies in its ability to produce results whether or not an analytic solution exists. Tests show that the generalized LSM produces results similar to the analytic valuation of fuzzy real options where such a valuation is possible. To facilitate parameter estimation for the fuzzy real options model, another numerical method is proposed to represent the likelihood of contamination of a brownfield using fuzzy boundaries. Linguistic quantifiers and ordered weighted averaging (OWA) techniques are utilized to determine the likelihood of pollution at sample locations based on multiple environmental indicators, acting as a fuzzy deduction rule to calculate the triangular membership functions of the fuzzy parameters.
Risk preferences of DMs are expressed as different "ORness" levels of OWA operators, which affect the likelihood estimates. When the fuzzy boundaries of a brownfield are generated by interpolation of sample points, the parameters of the fuzzy real options, drift rate and volatility, can be calculated as fuzzy numbers. Hence, the proposed method can act as an intermediary between DMs and the fuzzy real options model, making the model much easier to apply. The valuations of a brownfield by individual DMs can be input to the Graph Model for Conflict Resolution (GMCR) to identify possible resolutions of brownfield redevelopment negotiations among all possible states, or combinations of DMs' choices. Major redevelopment policies are studied using a brownfield redevelopment case, the Ralgreen Community in Kitchener, Ontario, Canada. A fuzzy preference framework and a probability-based comparison method for ranking fuzzy variables are employed to integrate fuzzy real options and GMCR. Insights into this conflict and general policy suggestions are provided. A potential negotiation support system (NSS) implementing these numerical methods is discussed in the context of negotiating brownfield redevelopment projects. The NSS combines computational modules, decision support system (DSS) prototypes, geographic information systems (GIS), and messaging systems. A public-private partnership (PPP) would be enhanced through the information sharing, scenario generation, and conflict analysis provided by the NSS, encouraging more efficient brownfield redevelopment and leading to greater regional sustainability. The integrated use of fuzzy real options, OWA, and GMCR takes advantage of both fuzziness and randomness, making a better evaluation technique available in a multi-DM negotiation setting. The decision techniques employed range from decision analysis and multiple criteria analysis to a game-theoretic approach, contributing to a broader picture of decision making under uncertainty. When these methods were applied to brownfield redevelopment, it was found that creating better business opportunities, such as allowing land-use changes that raise net income, is more important in determining equilibria than refunding remediation costs. Better redevelopment policies can thus be proposed to aid negotiations among stakeholders.
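
To make the OWA aggregation step concrete, the following sketch computes an ordered weighted average of several environmental indicators at one sample location and reports the ORness of each weight vector. The indicator values and weights are hypothetical; the fuzzy-boundary interpolation and option-pricing stages are omitted.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: the weights apply to the values after
    sorting them in descending order, not to fixed positions."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(v @ w)

def orness(weights):
    """ORness of an OWA weight vector: 1 = maximum (most optimistic),
    0 = minimum (most pessimistic), 0.5 = arithmetic mean."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    return float(sum((n - 1 - i) * w[i] for i in range(n)) / (n - 1))

# Three hypothetical contamination indicators at one location, scaled to [0, 1]
indicators = [0.8, 0.4, 0.1]
for w in ([0.6, 0.3, 0.1], [1/3, 1/3, 1/3], [0.1, 0.3, 0.6]):
    print(f"ORness = {orness(w):.2f}  pollution likelihood = {owa(indicators, w):.3f}")
```

Higher-ORness weightings emphasize the strongest evidence of pollution, so a DM's risk attitude shifts the aggregated likelihood, as the abstract describes.
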
133

Reconstruction of 3D Points From Uncalibrated Underwater Video

Cavan, Neil January 2011 (has links)
This thesis presents a 3D reconstruction software pipeline that is capable of generating point cloud data from uncalibrated underwater video. This research project was undertaken as a partnership with 2G Robotics, and the pipeline described in this thesis will become the 3D reconstruction engine for a software product that can generate photorealistic 3D models from underwater video. The pipeline proceeds in three stages: video tracking, projective reconstruction, and autocalibration. Video tracking serves two functions: tracking recognizable feature points, and selecting well-spaced keyframes with a wide enough baseline to be used in the reconstruction. Video tracking is accomplished using Lucas-Kanade optical flow as implemented in the OpenCV toolkit. This simple and widely used method is well suited to underwater video, which is taken by carefully piloted, slow-moving underwater vehicles. Projective reconstruction is the process of simultaneously calculating the motion of the cameras and the 3D locations of observed points in the scene. This is accomplished using a geometric three-view technique. Results are presented showing that the projective reconstruction algorithm detailed here compares favourably to state-of-the-art methods. Autocalibration is the process of transforming a projective reconstruction, which is not suitable for visualization or measurement, into a metric space where it can be used. This is the most challenging part of the 3D reconstruction pipeline, and this thesis presents a novel autocalibration algorithm. Results are shown for two existing cost-function-based methods in the literature, which failed when applied to underwater video, as well as for the proposed hybrid method. The hybrid method combines the best parts of its two parent methods and produces good results on underwater video. Final results are shown for the 3D reconstruction pipeline operating on short underwater video sequences to produce visually accurate 3D point clouds of the scene, suitable for photorealistic rendering. Although further work remains to extend and improve the pipeline for operation on longer sequences, this thesis presents a proof-of-concept method for 3D reconstruction from uncalibrated underwater video.
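
The video-tracking stage described in this abstract can be sketched with OpenCV's Lucas-Kanade implementation: features are tracked frame to frame, and a new keyframe is declared once the median displacement since the last keyframe exceeds a baseline threshold. The 40-pixel threshold and detector settings are illustrative, not the thesis's values, and feature re-detection when tracks die out is omitted.

```python
import cv2
import numpy as np

def select_keyframes(video_path, min_baseline=40.0):
    """Track features with Lucas-Kanade optical flow and keep a frame as
    a keyframe when the median feature displacement since the previous
    keyframe exceeds min_baseline (pixels)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    ref = pts.copy()                      # feature positions at the last keyframe
    keyframes = [prev]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good = status.ravel() == 1        # keep only successfully tracked points
        pts, ref = nxt[good], ref[good]
        baseline = np.median(np.linalg.norm((pts - ref).reshape(-1, 2), axis=1))
        if baseline > min_baseline:       # wide enough baseline for reconstruction
            keyframes.append(gray)
            ref = pts.copy()
        prev = gray
    return keyframes
```
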
134

Laser Interference Fringe Tomography - A Novel 3D Imaging Microscopy Technique

Kazemzadeh, Farnoud January 2011 (has links)
Laser interference fringe tomography (LIFT) belongs to the class of optical imaging devices designed for volumetric microscopy applications. LIFT is a very simple and cost-effective three-dimensional imaging device that is able to reliably produce low-quality imagery. It measures reflectivity as a function of depth within a sample and is capable of producing three-dimensional images from optically scattering surfaces. The first generation of this instrument has been designed and prototyped for optical microscopy. With an imaging spot size of 42 μm and a 180 μm axial resolution kernel, LIFT is capable of producing one- and two-dimensional images of various samples up to 1.5 mm in thickness. The prototype was built using commercial off-the-shelf components and cost approximately $1,000. With further effort, this device could become a reliable, stable, low-quality volumetric imaging microscope readily available to the consumer market at a very affordable price. This document presents the optical design of LIFT along with a complete mathematical description of the instrument. The design trade-offs and choices are discussed in detail and justified. The theoretical imaging capabilities of the instrument are tested and experimentally verified. Finally, some imaging results are presented and discussed.
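
The depth-ranging principle can be illustrated with a toy interferogram model. The sketch below assumes a coherence-gated axial scan in which each reflector contributes a fringe packet under a Gaussian envelope whose width echoes the 180 μm axial resolution kernel quoted above; the wavelength, reflectivities, and crude envelope demodulation are illustrative assumptions, not the actual LIFT optical design.

```python
import numpy as np

def fringe_scan(z, reflectors, wavelength=633e-9, kernel=180e-6):
    """Toy axial scan: each reflector at depth z_r with reflectivity r
    contributes 2*sqrt(r)*cos(4*pi*(z - z_r)/wavelength) under a
    Gaussian envelope of width 'kernel' (the axial resolution)."""
    signal = np.zeros_like(z)
    for z_r, r in reflectors:
        dz = z - z_r
        envelope = np.exp(-(dz / kernel) ** 2)
        signal += 2.0 * np.sqrt(r) * envelope * np.cos(4.0 * np.pi * dz / wavelength)
    return signal

z = np.linspace(0.0, 1.5e-3, 200000)   # 1.5 mm scan: the quoted max sample thickness
sig = fringe_scan(z, [(0.3e-3, 0.04), (0.9e-3, 0.01)])
depth_profile = np.abs(sig)            # crude envelope proxy for reflectivity vs. depth
```
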
135

Illumination and Noise-Based Scene Classification - Application to SAR Sea Ice Imagery

Bandekar, Namrata 16 January 2012 (has links)
Spatial intensity variation introduced by illumination changes is a challenging problem for image segmentation and classification. Many techniques have been proposed that focus on removing this illumination variation by estimating or modelling it. There is limited research on developing an illumination-invariant classification technique that does not require any preprocessing. A major focus of this research is on automatically classifying synthetic aperture radar (SAR) images. These are large satellite images that pose many challenges for image classification, including the incidence angle effect, a strong illumination variation across the image. Mapping of full-scene satellite images of sea ice is important for ship navigation and for climate research. The images obtained from the RADARSAT-2 satellite are dual-band, high-quality images. Currently, sea ice charts are produced manually by ice analysts at the Canadian Ice Service. However, this process can be automated to reduce processing time and obtain more detailed pixel-level ice maps. An automated classification algorithm that separates sea ice from open water would greatly help the ice analyst by providing guidance in the initial stages of creating an ice map. It would also help the analyst improve accuracy when estimating ice concentrations and remove subjective bias. The existing Iterative Region Growing by Semantics (IRGS) algorithm is not effective for full-scene segmentation because of the incidence angle effect. This research proposes a "glocal" (global as well as local) approach to solve this problem. The image is divided into a rectangular grid and each rectangle is segmented using IRGS. This is viewed as an over-segmentation of the original image. Finally, IRGS is used globally to glue together the over-segmented regions. This method yields acceptable results with the denoised images. The proposed technique can also be used for general image classification purposes. Extensive testing was done to investigate the best set of parameters for the proposed approach. Images were simulated with the SAR illumination variation and multiplicative speckle noise. The technique was effective for general classification and attained accurate results for full-scene SAR segmentation.
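
A minimal sketch of the "glocal" strategy follows: each tile of a rectangular grid is over-segmented locally (plain k-means on intensity stands in for IRGS), and the resulting regions are then glued by clustering their mean intensities globally. IRGS itself adds Markov spatial context; the grid size and class counts here are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def glocal_segment(img, grid=(4, 4), k_local=4, k_global=2):
    """(1) Over-segment each grid tile locally, then (2) 'glue' the
    over-segmented regions by clustering their mean intensities
    globally. Assumes the grid divides the image evenly."""
    H, W = img.shape
    hs, ws = H // grid[0], W // grid[1]
    labels = np.zeros((H, W), dtype=int)
    region_means, offset = [], 0
    for i in range(grid[0]):
        for j in range(grid[1]):
            tile = img[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            cent, lab = kmeans2(tile.reshape(-1, 1).astype(float),
                                k_local, minit='++')
            labels[i*hs:(i+1)*hs, j*ws:(j+1)*ws] = lab.reshape(tile.shape) + offset
            region_means.extend(cent.ravel())
            offset += k_local
    # Global gluing step: cluster the local region means into k_global classes
    _, glue = kmeans2(np.asarray(region_means).reshape(-1, 1), k_global, minit='++')
    return glue[labels]

# Toy 'SAR-like' scene: two-class texture under a left-to-right intensity ramp
rng = np.random.default_rng(0)
ramp = np.linspace(0.2, 1.0, 128)[None, :]
scene = ramp * (1.0 + 0.3 * (rng.random((128, 128)) > 0.5))
segmentation = glocal_segment(scene)
```
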
136

Conflicting Attitudes in Environmental Management and Brownfield Redevelopment

Walker, Sean 07 May 2012 (has links)
An enhanced attitudes methodology within the framework of the Graph Model for Conflict Resolution (GMCR) is developed and applied to a range of environmental disputes, including a sustainable development conflict, an international climate change negotiation, and a selection of brownfield conflicts over a proposed transfer of ownership. GMCR and the attitudes framework are first defined and then applied to a possible Sino-American climate negotiation over reductions in greenhouse gas emissions. A formal relationship between the attitudes framework and relative preferences is defined, and associated mathematical theorems, which relate the moves and solution concepts used in both types of analysis, are proven. Significant extensions of the attitudes methodology are devised in the thesis. The first, dominating attitudes, is a methodology by which the importance of a decision maker's (DM's) attitudes can be used to evaluate the strength of a given state's stability. The second, COalitions and ATtitudes (COAT), is an expansion of both the attitudes and coalitions frameworks that allows one to analyze the impact of attitudes within a collaborative decision-making setting. Finally, the matrix form of attitudes is a mathematical methodology that allows complicated solution concepts to be executed using matrix operations, thus making attitudes more adaptable to a coding environment. When applied to environmental management conflicts, these innovative expansions of the attitudes framework illustrate the importance of cooperation and diplomacy in environmental conflict resolution.
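
For readers unfamiliar with GMCR's machinery, the sketch below checks the simplest solution concept, Nash stability, on a toy two-DM conflict; the states, moves, and preference rankings are hypothetical. The attitudes extension discussed above would replace the plain preference relation with attitude-weighted relational preferences.

```python
def nash_stable(state, dm, reachable, prefer):
    """A state is Nash-stable for a DM if no unilateral move reaches a
    state that the DM prefers to the current one."""
    return not any(prefer[dm](s2, state) for s2 in reachable[dm][state])

# Toy 2-DM, 4-state conflict (states 0..3); moves and preferences are illustrative
reachable = {
    'developer': {0: [1], 1: [0], 2: [3], 3: [2]},
    'city':      {0: [2], 1: [3], 2: [0], 3: [1]},
}
rank = {'developer': [3, 1, 2, 0],   # most to least preferred
        'city':      [2, 3, 0, 1]}
prefer = {dm: (lambda a, b, r=rank[dm]: r.index(a) < r.index(b)) for dm in rank}

equilibria = [s for s in range(4)
              if all(nash_stable(s, dm, reachable, prefer) for dm in reachable)]
print(equilibria)   # states stable for every DM, i.e. Nash equilibria
```
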
137

Frozen-State Hierarchical Annealing

Campaigne, Wesley January 2012 (has links)
There is significant interest in the synthesis of discrete-state random fields, particularly those possessing structure over a wide range of scales. However, given a model on some finest, pixellated scale, it is computationally very difficult to synthesize both large- and small-scale structures, motivating research into hierarchical methods. This thesis proposes a frozen-state approach to hierarchical modelling, in which simulated annealing is performed on each scale, constrained by the state estimates at the parent scale. The approach yields significant advantages in both modelling flexibility and computational complexity. In particular, a complex structure can be realized with very simple, local, scale-dependent models, and by constraining the domain to be annealed at finer scales to only the uncertain portions of coarser scales, the approach leads to huge improvements in computational complexity. Results are shown for synthesis problems in porous media.
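
The following sketch illustrates one coarse-to-fine step of the frozen-state idea: coarse labels are upsampled, pixels whose coarse neighborhood is unanimous are frozen, and Metropolis annealing runs only on the remaining uncertain pixels. The Ising-style local energy and the freezing rule are toy stand-ins for the thesis's scale-dependent models.

```python
import numpy as np

def refine_scale(coarse, beta=2.0, sweeps=20, rng=None):
    """Upsample coarse binary labels by 2x, freeze 'certain' pixels, and
    anneal only the uncertain ones with a Metropolis sampler."""
    rng = rng or np.random.default_rng(0)
    fine = np.kron(coarse, np.ones((2, 2), dtype=int))
    H, W = fine.shape
    # A coarse pixel is 'certain' if it agrees with all eight neighbors
    h, w = coarse.shape
    pad = np.pad(coarse, 1, mode='edge')
    unanimous = np.ones(coarse.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            unanimous &= pad[1+di:1+di+h, 1+dj:1+dj+w] == coarse
    frozen = np.kron(unanimous.astype(int), np.ones((2, 2), dtype=int)).astype(bool)
    free = np.argwhere(~frozen)
    for _ in range(sweeps):
        for i, j in free[rng.permutation(len(free))]:
            nbrs = [fine[i2, j2] for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= i2 < H and 0 <= j2 < W]
            flip = 1 - fine[i, j]
            # Local mismatch energy: count of disagreeing 4-neighbors
            dE = sum(v != flip for v in nbrs) - sum(v != fine[i, j] for v in nbrs)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                fine[i, j] = flip
    return fine

coarse = (np.random.default_rng(1).random((16, 16)) > 0.5).astype(int)
fine = refine_scale(coarse)   # 32x32 field, annealed only where uncertain
```
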
138

A mechanistic model of motion processing in the early visual system

Hurzook, Aziz 23 November 2012 (has links)
A prerequisite for the perception of motion in primates is the transformation of varying intensities of light on the retina into estimates of the position, direction, and speed of coherent objects. The neuro-computational mechanisms relevant for object feature encoding have been thoroughly explored, with many neurally plausible models able to represent static visual scenes. However, motion estimation requires the comparison of successive scenes through time. Precisely how the necessary neural dynamics arise and how other related neural system components interoperate have yet to be shown in a large-scale, biologically realistic simulation. The proposed model simulates a spiking neural network computation for representing object velocities in cortical area V1 and the middle temporal area (MT). The essential neural dynamics, hypothesized to reside in networks of V1 simple cells, are implemented through recurrent population connections that generate oscillating spatiotemporal tunings. These oscillators produce a resonance response when stimuli move in an appropriate manner in their receptive fields. The simulation shows close agreement between the predicted and actual impulse responses of V1 simple cells using an ideal stimulus. By integrating the activities of similarly tuned V1 simple cells over space, a local measure of visual pattern velocity can be produced. This measure is also the linear weight of an associated velocity in a retinotopic map of optical flow. As a demonstration, the classic motion stimuli of drifting sinusoidal gratings and variably coherent dots are applied, and optical flow maps are generated. Vector field representations of this structure may serve as inputs for perception and decision-making processes in later brain areas.
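
The resonance behaviour attributed to the recurrent V1 populations can be caricatured with a driven damped oscillator: the temporal signal that a drifting grating induces at one retinal location drives the oscillator, and the response peaks when the grating's temporal frequency (spatial frequency times speed) matches the oscillator's natural frequency. All parameters below are illustrative, not fitted values from the model.

```python
import numpy as np

def oscillator_response(speed, k=2*np.pi*2, omega=2*np.pi*4, zeta=0.1,
                        T=2.0, dt=1e-3):
    """Peak response of a damped harmonic oscillator (natural frequency
    omega, damping ratio zeta) driven by the luminance signal of a
    drifting grating with spatial frequency k (rad/deg) at speed
    (deg/s): the drive's temporal frequency is k * speed."""
    t = np.arange(0.0, T, dt)
    drive = np.cos(k * speed * t)     # luminance at a fixed retinal point
    x, v, peak = 0.0, 0.0, 0.0
    for d in drive:                   # semi-implicit Euler integration
        a = d - 2.0 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Preferred speed = omega / k = 2 deg/s; response peaks near that speed
for s in (0.5, 1.0, 2.0, 4.0):
    print(s, round(oscillator_response(s), 5))
```
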
139

The Application of FROID in MR Image Reconstruction

Vu, Linda January 2010 (has links)
In magnetic resonance imaging (MRI), sampling methods that lead to incomplete data coverage of k-space are used to accelerate imaging and reduce overall scan time. Non-Cartesian sampling trajectories, such as radial, spiral, and random trajectories, are employed to facilitate advanced imaging techniques, such as compressed sensing, or to provide more efficient coverage of k-space within a shorter scan period. When k-space is undersampled or unevenly sampled, traditional methods of transforming Fourier data to obtain the desired image, such as the FFT, may no longer be applicable. The Fourier reconstruction of optical interferometer data (FROID) algorithm is a novel reconstruction method developed by A. R. Hajian that has been successful in the field of optical interferometry in reconstructing images from sparsely and unevenly sampled data. It is applicable to cases where the collected data are a Fourier representation of the desired image or spectrum. The framework allows a priori information, such as the positions of the sampled points, to be incorporated into the reconstruction. Initially, FROID assumes a guess of the real-valued spectrum or image in the form of an interpolated function and calculates the corresponding integral Fourier transform. Amplitudes are then sampled in the Fourier space at locations corresponding to the acquired measurements to form a model dataset. The guess spectrum or image is then adjusted so that the model dataset is least-squares fitted to the measured values. In this thesis, FROID has been adapted and implemented for use in MRI, where k-space is the Fourier transform of the desired image. By forming a continuous mapping of the image and modelling data in the Fourier space, a comparison and optimization with respect to data acquired in k-space that is either undersampled or irregularly sampled can be performed as long as the sampling positions are known. To apply FROID to the reconstruction of magnetic resonance images, an appropriate objective function expressing the desired least-squares fit criteria was defined, and the model for interpolating Fourier data was extended to include complex-valued images. When a test image composed of two Gaussian functions was used, FROID was able to reconstruct images from data randomly sampled in k-space and was not restricted to data sampled evenly on a Cartesian grid. A complex-valued MR image of a bone was also reconstructed using FROID, and the magnitude image was compared to that reconstructed by the FFT. It was found that FROID outperformed the FFT in certain cases, even when data were rectilinearly sampled.
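
A minimal one-dimensional sketch of the FROID-style fitting loop: the image is parameterized on a grid, its Fourier transform is evaluated at the (off-grid) acquired k-space locations via an explicit DFT matrix, and the model samples are least-squares fitted to the measurements. Real FROID interpolates a continuous image model and handles 2D complex images; the sizes, sampling pattern, and function names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def froid_recon_1d(k_locs, measured, n=32):
    """Fit a complex-valued 1D image so that its Fourier samples at
    k_locs match the measured k-space values in the least-squares
    sense. The forward model is an explicit non-uniform DFT matrix."""
    x = np.arange(n)
    A = np.exp(-2j * np.pi * np.outer(k_locs, x) / n)  # image -> k-space samples

    def objective(img_ri):
        img = img_ri[:n] + 1j * img_ri[n:]             # real/imag parts stacked
        resid = A @ img - measured
        return np.sum(np.abs(resid) ** 2)

    res = minimize(objective, np.zeros(2 * n), method='L-BFGS-B')
    return res.x[:n] + 1j * res.x[n:]

# Simulate: a two-Gaussian image, randomly (non-Cartesian) sampled in k-space
n = 32
x = np.arange(n)
truth = np.exp(-(x - 10.0)**2 / 4) + 0.5 * np.exp(-(x - 22.0)**2 / 9)
rng = np.random.default_rng(0)
k_locs = rng.uniform(0, n, size=24)                    # undersampled, off-grid
measured = np.exp(-2j * np.pi * np.outer(k_locs, x) / n) @ truth
recon = froid_recon_1d(k_locs, measured, n)            # approximate: fit is underdetermined
```
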
140

Decoupled Deformable Model For 2D/3D Boundary Identification

Mishra, Akshaya Kumar 07 1900 (has links)
The accurate detection of static object boundaries, such as contours or surfaces, and of the dynamic tunnels of moving objects via deformable models is an ongoing research topic in computer vision. Most deformable models attempt to converge towards a desired solution by minimizing the sum of internal (prior) and external (measurement) energy terms. Such an approach is elegant, but it frequently mis-converges in the presence of noise or complex boundaries and typically requires careful semi-dependent parameter tuning and initialization. Furthermore, current deformable-model-based approaches are computationally demanding, which precludes real-time use. To address these limitations, a decoupled deformable model (DDM) is developed which optimizes the two energy terms separately. Essentially, the DDM consists of a measurement update step, employing a Hidden Markov Model (HMM) and maximum likelihood (ML) estimator, followed by a separate prior step, which modifies the updated deformable model based on the relative strengths of the measurement uncertainty and the non-stationary prior. The non-stationary prior is generated using a curvature-guided importance sampling method to capture high-curvature regions. By separating the measurement and prior steps, the algorithm is less likely to mis-converge; furthermore, the use of a non-iterative ML estimator allows the method to converge more rapidly than energy-based iterative solvers. The full functionality of the DDM is developed in three phases. First, a 2D DDM called the decoupled active contour (DAC) is developed to accurately identify the boundary of a 2D object in the presence of noise and background clutter. To carry out this task, the DAC employs the Viterbi algorithm as a truncated ML estimator, curvature-guided importance sampling as a non-stationary prior generator, and a linear Bayesian estimator to fuse the non-stationary prior with the measurements. Experimental results clearly demonstrate that the DAC is robust to noise, can capture regions of very high curvature, and exhibits limited dependence on contour initialization or parameter settings. Compared to three other published methods and across many images, the DAC is found to be faster and to offer consistently accurate boundary identification. Second, a fast decoupled active contour (FDAC) is proposed to accelerate the convergence rate and improve the scalability of the DAC, without sacrificing accuracy, by employing computationally efficient and scalable techniques to solve the three primary steps of the DAC. The computational advantage of the FDAC is demonstrated both experimentally and analytically in comparison with three computationally efficient methods, using illustrative examples. Finally, an extension of the FDAC from 2D to 3D, called the decoupled active surface (DAS), is developed to precisely identify the surface of a volumetric 3D image and the tunnel of a moving 2D object. To achieve the objectives of the DAS, the concepts of the FDAC are extended to 3D by using a specialized 3D deformable model representation scheme and a computation- and storage-efficient estimation scheme. The performance of the DAS is demonstrated using several natural and synthetic volumetric images and a sequence of moving objects.
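
The measurement step of the DAC can be sketched as a dynamic program: each contour point is assigned a cost for every candidate position along its normal, and the Viterbi algorithm picks the minimum-cost sequence subject to a smoothness penalty on jumps between neighbouring points. The contour is treated as open here and the costs are synthetic; the DAC additionally closes the loop and fuses this ML estimate with its curvature-based prior.

```python
import numpy as np

def viterbi_boundary(costs, smooth=1.0):
    """Minimum-cost candidate sequence via Viterbi dynamic programming.
    costs[i, j] is the measurement cost of candidate j at contour point
    i; transitions pay smooth * (jump in candidate index)**2."""
    n_pts, n_cand = costs.shape
    dp = costs[0].copy()
    back = np.zeros((n_pts, n_cand), dtype=int)
    idx = np.arange(n_cand)
    jump = smooth * (idx[:, None] - idx[None, :]) ** 2   # jump[j, k]: cost k -> j
    for i in range(1, n_pts):
        total = dp[None, :] + jump                       # total[j, k] = dp[k] + penalty
        back[i] = np.argmin(total, axis=1)               # best predecessor for each j
        dp = total[idx, back[i]] + costs[i]
    path = [int(np.argmin(dp))]
    for i in range(n_pts - 1, 0, -1):                    # backtrack
        path.append(back[i, path[-1]])
    return np.array(path[::-1])

# Toy example: 5 contour points, 7 candidates each; zeros mark a planted edge
rng = np.random.default_rng(0)
costs = rng.random((5, 7))
costs[np.arange(5), [3, 3, 4, 4, 3]] = 0.0
print(viterbi_boundary(costs, smooth=0.1))   # tracks the planted edge
```
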
