1

Mapping unstructured mesh codes onto local memory parallel architectures

Jones, Beryl Wyn, January 1994
Initial work on mapping CFD codes onto parallel systems focused upon software which employed structured meshes. Increasingly, many large scale CFD codes are being based upon unstructured meshes. One of the key problems when implementing such large scale unstructured problems on a distributed memory machine is the question of how to partition the underlying computational domain efficiently. It is important that all processors are kept busy for as large a proportion of the time as possible and that the amount, level and frequency of communication should be kept to a minimum. Proposed techniques for solving the mapping problem have separated the solution into two distinct phases. The first phase is to partition the computational domain into cohesive sub-regions. The second phase consists of embedding these sub-regions onto the processors. However, it has been shown that performing these two operations in isolation can lead to poor mappings and suboptimal communication times. In this thesis we develop a technique which takes account of the processor topology whilst identifying the cohesive sub-regions. Our approach is based on an unstructured mesh decomposition method that was originally developed by Sadayappan et al. [SER90] for a hypercube. This technique forms the basis of a method which enables a decomposition onto an arbitrary number of processors on a specified processor network topology. Whilst partitioning the mesh, the optimisation method takes the processor topology into account by minimising the total interprocessor communication. The drawback of this technique is that it is not suitable for very large meshes, since the calculations often require prodigious amounts of processing power.
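As a toy illustration of why the embedding phase matters, the sketch below (a Python example invented for this purpose, not the thesis's method) contrasts the classical edge-cut measure with a communication cost that weights each cut edge by the network distance between the processors the two sub-regions are embedded onto:

    import numpy as np

    def comm_cost(cut_edges, part, hops):
        # Each cut edge (u, v) costs the number of network hops between
        # the processors that own u and v, not just a flat count of 1.
        return sum(hops[part[u], part[v]] for u, v in cut_edges)

    # 4 processors in a 1-D chain: hop distance between processors i and j
    hops = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a tiny "mesh": 4 elements in a ring
    part_a = {0: 0, 1: 1, 2: 2, 3: 3}          # neighbouring elements on neighbouring procs
    part_b = {0: 0, 1: 2, 2: 1, 3: 3}          # same partition sizes, poorer placement

    for part in (part_a, part_b):
        cut = [(u, v) for u, v in edges if part[u] != part[v]]
        print(len(cut), comm_cost(cut, part, hops))   # (4, 6) then (4, 8)

Both partitions cut four edges, yet the second costs more because adjacent sub-regions land on distant processors; identifying sub-regions and their embedding together is precisely what avoids such mappings.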
2

Modelling of liquid droplet dynamics in a high DC magnetic field

Easter, Stuart, January 2012
The oscillating droplet technique is an experimental technique that is used to measure the surface tension and viscous damping coefficients of a liquid droplet. This technique has been the subject of much theoretical, numerical and experimental analysis, with a number of different external forces used to confine the droplet. These external forces are found to modify the oscillation frequency and damping rates, which need to be quantified in order for the measurement technique to be used. The dynamics of the droplet are three-dimensional, but previous numerical work has largely focused on axisymmetric cases. This work uses numerical techniques to extend the previous analysis to include the full three-dimensional effects. In this work a three-dimensional numerical model is designed, developed and applied to study the dynamics of a liquid droplet both in free space and with a high DC magnetic field used to balance gravitational forces. The numerical model is a grid point formulation of the pseudo-spectral collocation method discretised in a spherical coordinate system, with the implicit Euler method used to advance the solution in time. A coordinate transformation method is used to ensure the direct surface tracking required for modelling the surface shape oscillations. The study covers the laminar fluid flow regime within a droplet exhibiting translational and surface shape oscillations, providing a greater understanding of the physical behaviour of the droplet along with a qualitative and quantitative comparison with theoretical results. Initially a droplet oscillating in free space is considered, with a range of surface oscillation modes used to demonstrate the three-dimensional dynamics. Then the influence of electromagnetic forces on a diamagnetic droplet is studied, including the field from a solenoid magnet used to levitate the droplet. Finally the dynamics of an electrically conducting droplet in an external static magnetic field are modelled. In each case a number of methods are used to analyse the surface displacement in order to determine the surface tension and viscous damping coefficients. The numerical study of a freely oscillating droplet shows good agreement with the low order theoretical results for droplets in the limit of low viscosity. The high accuracy of the surface tracking method allows the non-linear effects of mode coupling and of frequency shift with amplitude to be observed. There is good agreement with the theoretical values available for inviscid axisymmetric oscillations, and the numerical study provides the opportunity to determine these effects for three-dimensional viscous oscillations. The magnetic field from a solenoid is used to study the levitation of a diamagnetic droplet, and the oscillation frequencies of the droplet are compared with a theoretical model. The accuracy of the field calculation used in determining the modification to the oscillation frequencies is assessed against a theoretical model, and the splitting of the frequency spectrum due to the magnetic field is analysed. The theoretical model available for an electrically conducting droplet in a static magnetic field predicts changes to the fluid flow within the droplet and to the oscillation frequencies and damping rates. These changes are compared qualitatively and quantitatively with the numerical model results, with good agreement.
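The low order theoretical results referred to are the classical inviscid oscillation frequency (Rayleigh) and the low-viscosity damping rate (Lamb) for a mode-l surface oscillation. A minimal Python sketch, with parameter values invented for a roughly water-like 1 mm droplet:

    import numpy as np

    def rayleigh_frequency(l, sigma, rho, R):
        # inviscid angular frequency: w_l^2 = l (l - 1)(l + 2) sigma / (rho R^3)
        return np.sqrt(l * (l - 1) * (l + 2) * sigma / (rho * R**3))

    def lamb_damping_rate(l, nu, R):
        # low-viscosity damping rate of mode l: (l - 1)(2 l + 1) nu / R^2
        return (l - 1) * (2 * l + 1) * nu / R**2

    w = rayleigh_frequency(2, sigma=0.072, rho=1000.0, R=1e-3)   # fundamental l = 2 mode
    print(w / (2 * np.pi), "Hz")                                 # ~120 Hz oscillation
    print(1 / lamb_damping_rate(2, nu=1e-6, R=1e-3), "s decay")  # ~0.2 s decay time

Deviations of the measured frequencies and decay rates from these references are what the confining forces (here, the magnetic field) introduce, and what the numerical model quantifies.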
3

A strategy for mapping unstructured mesh computational mechanics programs onto distributed memory parallel architectures

McManus, Kevin, January 1996
The motivation of this thesis was to develop strategies that would enable unstructured mesh based computational mechanics codes to exploit the computational advantages offered by distributed memory parallel processors. Strategies that successfully map structured mesh codes onto parallel machines have been developed over the previous decade and used to build a toolkit for automation of the parallelisation process. Extending the capabilities of this toolkit to unstructured mesh codes requires new strategies to be developed. This thesis examines the method of parallelisation by geometric domain decomposition using the single program multiple data (SPMD) programming paradigm with explicit message passing. This technique involves splitting (decomposing) the problem definition into P parts that may be distributed over P processors in a parallel machine. Each processor runs the same program and operates only on its part of the problem. Messages passed between the processors allow data exchange to maintain consistency with the original algorithm. The strategies developed to parallelise unstructured mesh codes should meet a number of requirements: the algorithms are faithfully reproduced in parallel; the code is largely unaltered in the parallel version; the parallel efficiency is maximised; the techniques scale to highly parallel systems; and the parallelisation process can be automated. Techniques and strategies that meet these requirements are developed and tested in this dissertation using a state of the art integrated computational fluid dynamics and solid mechanics code. The results presented demonstrate the importance of the problem partition in the definition of inter-processor communication and hence in parallel performance. The classical measure of partition quality, based on the number of cut edges in the mesh partition, can be inadequate for real parallel machines. Consideration of the topology of the parallel machine in the mesh partition is demonstrated to be a more significant factor in the achieved parallel efficiency than the number of cut edges. It is shown to be advantageous to allow an increase in the volume of communication in order to achieve an efficient mapping dominated by localised communications. The limitation to parallel performance resulting from communication startup latency is clearly revealed, together with strategies to minimise its effect. The generic application of the techniques to other unstructured mesh codes is discussed in the context of automation of the parallelisation process. Automation of parallelisation based on the developed strategies is shown to be possible through the use of run-time inspector loops to accurately determine the dependencies that define the necessary inter-processor communication.
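A minimal sketch of the SPMD halo-exchange pattern described above, assuming mpi4py and a one-dimensional chain of processors (the mesh size, data and smoothing step are invented for illustration; run with e.g. mpiexec -n 4 python halo.py):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                        # mesh cells owned by this processor
    u = np.zeros(n_local + 2)            # interior cells plus one halo cell each end
    u[1:-1] = rank                       # dummy initial data

    left = rank - 1 if rank > 0 else MPI.PROC_NULL      # chain ends talk to no one
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for _ in range(10):
        # swap halo values so each processor sees the cells bordering its part
        r = comm.sendrecv(u[1], dest=left, source=left)
        if r is not None:                # None when the neighbour is PROC_NULL
            u[0] = r
        r = comm.sendrecv(u[-2], dest=right, source=right)
        if r is not None:
            u[-1] = r
        u[1:-1] = 0.5 * (u[:-2] + u[2:])   # stand-in for the real solver update

Each processor runs this same program on its own slice of the mesh; only a few boundary values per neighbour cross the network each iteration, which is why keeping communication local and infrequent can matter more than the raw edge-cut count.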
4

Mesh generation by domain bisection

Lawrence, Peter James, January 1994
The research reported in this dissertation was undertaken to investigate efficient computational methods of automatically generating three-dimensional unstructured tetrahedral meshes. The work on two-dimensional triangular unstructured grid generation by Lewis and Robinson [LeR76] is first examined, in which a recursive bisection technique of computational order n log(n) was implemented. This technique is then extended to incorporate new methods of geometry input and the automatic handling of multi-connected regions. The method of two-dimensional recursive mesh bisection is then further modified to incorporate an improved strategy for the selection of bisections. This enables an automatic nodal placement technique to be implemented in conjunction with the grid generator. The dissertation then investigates methods of generating triangular grids over parametric surfaces. This includes a new definition of surface Delaunay triangulation, with the extension of grid improvement techniques to surfaces. Based on the assumption that all surface grids of objects form polyhedral domains, a three-dimensional mesh generation technique is derived. This technique is a hybrid of recursive domain bisection coupled with a min-max heuristic triangulation algorithm, chosen to achieve a computationally efficient and reliable algorithm coupled with a fast nodal placement technique. The algorithm generates three-dimensional unstructured tetrahedral grids over polyhedral domains with multi-connected regions in an average computational order of less than n log(n).
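The flavour of recursive bisection can be seen in a generic Python sketch (this splits a point cloud, not the dissertation's geometric domains, and ignores nodal placement): cut at the median of the longest bounding-box axis, then recurse on each half.

    import numpy as np

    def bisect(points, depth):
        # recursively bisect into 2**depth sub-domains of near-equal size
        if depth == 0:
            return [points]
        spans = points.max(axis=0) - points.min(axis=0)
        axis = int(np.argmax(spans))            # cut across the longest direction
        order = np.argsort(points[:, axis])     # median split via sorting
        half = len(points) // 2
        return (bisect(points[order[:half]], depth - 1)
                + bisect(points[order[half:]], depth - 1))

    pts = np.random.rand(1000, 2)
    print([len(r) for r in bisect(pts, 3)])     # 8 sub-domains of ~125 points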
5

An investigation into automation of fire field modelling techniques

Taylor, Stephen John, January 1997
The research described in this thesis has produced a prototype system based on fire field modelling techniques for use by members of the Fire Safety Engineering community who are not expert in modelling techniques. The system captures the qualitative reasoning of an experienced modeller in the assessment of room geometries in order to set up important initial parameters of the problem. The prototype system is based on artificial intelligence techniques, specifically expert system technology. It is implemented as a case based reasoning (CBR) system, primarily because it was discovered that the expert uses case based reasoning when manually dealing with such problems. The thesis answers three basic research questions, organised into a primary question and two subsidiary questions. The primary question is: how can CFD setup for fire modelling problems be automated? The two subsidiary questions are concerned with how to represent the qualitative and quantitative knowledge associated with fire modelling, and with the selection of the most appropriate method of knowledge storage and retrieval. The thesis describes how knowledge has been acquired and represented for the system, pattern recognition issues, the methods of knowledge storage and retrieval chosen, the implementation of the prototype system, and validation. Validation has shown that the system models the expert’s knowledge in a satisfactory way and that the system performs competently when faced with new problems. The thesis concludes with a section regarding new research questions arising from the research, and the further work these questions entail.
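The core CBR loop (retrieve the most similar past case, then reuse and adapt its setup) can be sketched as follows; the geometry features, weights and stored cases are invented for illustration and are not taken from the thesis:

    cases = [
        {"geometry": {"length": 5.0, "width": 4.0, "height": 2.5, "openings": 1},
         "setup": {"mesh": (30, 24, 15), "sim_time_s": 600}},
        {"geometry": {"length": 12.0, "width": 8.0, "height": 3.0, "openings": 3},
         "setup": {"mesh": (60, 40, 18), "sim_time_s": 1200}},
    ]
    WEIGHTS = {"length": 1.0, "width": 1.0, "height": 2.0, "openings": 3.0}

    def similarity(a, b):
        # weighted inverse distance over normalised feature differences
        d = sum(w * abs(a[k] - b[k]) / max(a[k], b[k], 1e-9)
                for k, w in WEIGHTS.items())
        return 1.0 / (1.0 + d)

    def retrieve(query):
        # return the stored case whose room geometry best matches the query
        return max(cases, key=lambda c: similarity(c["geometry"], query))

    best = retrieve({"length": 6.0, "width": 4.5, "height": 2.4, "openings": 1})
    print(best["setup"])    # reuse, then adapt, the closest room's CFD setup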
6

A domain independent adaptive imaging system for visual inspection

Panayiotou, Stephen, January 1995
Computer vision is a rapidly growing area. The range of applications is increasing very quickly: robotics, inspection, medicine, physics and document processing are all computer vision applications still in their infancy. All these applications are written with a specific task in mind and do not perform well unless operated in a controlled environment. They do not deploy any knowledge to produce a meaningful description of the scene, or indeed to aid in the analysis of the image. The construction of a symbolic description of a scene from a digitised image is a difficult problem. A symbolic interpretation of an image can be viewed as a mapping from the image pixels to an identification of the semantically relevant objects. Before symbolic reasoning can take place, image processing and segmentation routines must produce the relevant information. This part of the imaging system inherently introduces many errors. The aim of this project is to reduce the error rate produced by such algorithms and to make them adaptable to change in the manufacturing process. Thus a priori knowledge is needed about the image and the objects it contains, as well as knowledge about how the image was acquired from the scene (image geometry, quality, object decomposition, lighting conditions etc.). Knowledge about algorithms must also be acquired. Such knowledge is collected by studying the algorithms and deciding in which areas of image analysis they work well. In most existing image analysis systems, knowledge of this kind is implicitly embedded in the algorithms employed in the system. Such an approach assumes that all these parameters are invariant. However, in complex applications this may not be the case, so adjustments must be made from time to time to ensure a satisfactory performance of the system. A system that allows such adjustments to be made must comprise an explicit representation of the knowledge utilised in the image analysis procedure. In addition to the use of a priori knowledge, rules are employed to improve the performance of the image processing and segmentation algorithms. These rules considerably enhance the correctness of the segmentation process. The most frequent goal, if not the only one, in industrial image analysis is to detect and locate objects of a given type in the image. That is, an image may contain objects of different types, and the goal is to identify parts of the image. The system developed here is driven by these goals; thus, by teaching the system a new object or a new fault in an object, the system may adapt the algorithms to detect these new objects as well as compensate for changes in the environment, such as a change in lighting conditions. We have called this system the Visual Planner because it uses techniques based on planning to achieve a given goal. As the Visual Planner learns the specific domain it is working in, appropriate algorithms are selected to segment the object. This makes the system domain independent, because different algorithms may be selected for different applications and objects under different environmental conditions.
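Goal-driven selection of segmentation algorithms can be sketched as a small rule base consulted by the planner; the condition names, rules and algorithm names below are invented for illustration:

    RULES = [
        ({"lighting": "uniform", "contrast": "high"}, "global_threshold"),
        ({"lighting": "uneven", "contrast": "high"}, "adaptive_threshold"),
        ({"contrast": "low"}, "edge_based_segmentation"),
    ]

    def select_algorithm(conditions):
        # the first rule whose preconditions all match the observed
        # imaging conditions wins
        for preconditions, algorithm in RULES:
            if all(conditions.get(k) == v for k, v in preconditions.items()):
                return algorithm
        return "region_growing"    # fallback when no rule applies

    print(select_algorithm({"lighting": "uneven", "contrast": "high"}))
    # -> adaptive_threshold; teaching the system a new object or a changed
    #    environment amounts to adding or revising entries in RULES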
7

Quantitative studies of the structure and chemistry of materials at the nano- and atomic-scale

Bigatti, Marco, January 2015
In this thesis electron microscopy was employed to characterise the nanoscale and atomic scale structure and chemistry of organic and inorganic materials. In chapter 4, the thin film morphology of the organic blend of poly(9,9-dioctylfluorene-co-benzothiadiazole) (commonly referred to as F8BT) and poly[9,9-dioctylfluorene-co-N-(4-butylphenyl)-diphenylamine] (abbreviated as TFB) was investigated, mainly by bright field transmission electron microscopy (BF-TEM). F8BT and TFB are conjugated polymers, which are candidates to replace inorganic semiconductors in many applications because of their simple preparation and processing procedures. The phase separation of the F8BT:TFB blend was investigated at different compositions. Polymer domains were found in the thin film, with sub-micrometer sizes which vary with concentration. The 1:1 weight ratio sample showed sub-micrometer TFB rich areas in an F8BT matrix, while the 1:4 weight ratio thin film presented F8BT phases, whose areas are mostly below 0.02 μm², in a TFB layer. Since some electronic applications, especially in optoelectronics, show increased efficiency after addition of quantum dots to the polymer blend, the effect of CdSe quantum dots on the phase separation of the organic blend was investigated, together with their effect on the nanoscale morphology. The CdSe quantum dots were found to aggregate in clusters with limited dispersion within the polymer domains, which did not present significant morphology changes as a consequence of quantum dot (QD) addition. The atomic structure and chemistry of the inorganic Ba6−3xNd8+2xTi18O54 microwave ceramic was quantitatively investigated in chapter 4, using high resolution scanning transmission electron microscopy (HR-STEM) and electron energy loss spectroscopy (EELS). These materials are an essential part of telecommunication systems; they can be found in components such as resonators and antennas, on account of their high permittivity, temperature stability and very low dielectric loss at microwave frequencies. The unit cell was refined with sub-Å precision based on extensive data analysis of HR-STEM images, and the unit cell structure showed no significant changes as a consequence of changes in composition or cooling rate after annealing. Ba was found to substitute preferentially onto specific Nd atomic columns in the structure, and these trends apply across the whole composition range. These results were confirmed by comparisons with image simulations and provided a starting point for improved refinements of X-ray data.
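Quantitative statements such as "areas mostly below 0.02 μm²" rest on routine quantification of segmented images. A generic Python sketch (the pixel calibration, the random stand-in for a segmented phase map, and the 0.3 threshold are all invented for illustration):

    import numpy as np
    from scipy import ndimage

    nm_per_px = 2.0                              # assumed image calibration
    phase = np.random.rand(512, 512) < 0.3       # stand-in for a segmented BF-TEM map
    labels, n = ndimage.label(phase)             # connected-component labelling
    areas_px = np.bincount(labels.ravel())[1:]   # pixels per domain (index 0 = background)
    areas_um2 = areas_px * (nm_per_px / 1000.0) ** 2
    print(n, "domains;", np.mean(areas_um2 < 0.02), "fraction below 0.02 um^2")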
8

Transmission electron tomography : quality assessment and enhancement for three-dimensional imaging of nanostructures

Al-afeef, Ala', January 2016
Nanotechnology has revolutionised humanity's capability in building microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram that is achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regular shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method was proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments on several morphologies are performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select, or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This can also avoid artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
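The sparsifying half of such an alternating scheme (learn a dictionary from the data's own patches, then approximate every patch with a few atoms) can be sketched with scikit-learn. This is a generic patch-dictionary denoiser under invented parameters, not the thesis's DLET implementation, and the data-consistency step it would alternate with is omitted:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(img, patch=(8, 8), n_atoms=64, n_nonzero=4):
        patches = extract_patches_2d(img, patch)           # overlapping patches
        data = patches.reshape(len(patches), -1)
        mean = data.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=n_nonzero)
        code = dico.fit(data - mean).transform(data - mean)   # sparse codes via OMP
        approx = (code @ dico.components_ + mean).reshape(patches.shape)
        return reconstruct_from_patches_2d(approx, img.shape) # re-average overlaps

    noisy = np.clip(np.eye(64) + 0.2 * np.random.randn(64, 64), 0, 1)
    clean = dictionary_denoise(noisy)

Because the dictionary is learned from the image being restored, the sparse model adapts to the specimen instead of assuming a fixed transform such as Total Variation.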
9

Theory and modelling of wavelength tunable laser transmitters with enhanced tuning range and their modulation performance

Kyritsis, Georgios, January 2015
The research that is described in detail in this thesis investigates key characteristics of the operation of Tunable Laser Diodes (TLDs), such as Continuous Wave (CW) operation; discontinuous, continuous and quasicontinuous wavelength tuning; and direct Intensity Modulation (IM) (small-signal analysis). Two software simulation tools were used to model the TLDs and investigate their operation: Crosslight PICS3D and VPI (Virtual Photonics Incorporated). Two different Free-Carrier (FC) contributions to the refractive index change of the TLD during FC tuning were investigated: the FC plasma effect and the band-filling effect, which is modelled via the Kramers-Kronig (KK) relations (KK effect). It was found that the band-filling effect, heavily underestimated and little investigated in the published literature, is in fact the main contributor to the refractive index change rather than the plasma effect. Different types of wavelength tuning were also investigated. It was found that with careful design of the passive sections, such as the κL product, grating composition, section length and passive waveguide thickness, the discontinuous, continuous and quasicontinuous tuning range can be enhanced greatly. The issue of output power decrease during discontinuous tuning in bulk and Multiple Quantum Well (MQW) TLDs was also addressed, and it was found that the power drop can be delayed to later stages of the tuning range by carefully selecting the Lorentzian lineshape of the gain spectrum. Power stabilisation was realised with continuous tuning. A small-signal analysis of directly intensity modulated TLDs during discontinuous tuning was also made, and it was found that the increase of the resonance frequency depends mainly on the increase of the differential gain with the wavelength change.
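The link between differential gain and resonance frequency in the small-signal analysis is the standard rate-equation result f_r = (1/2*pi) * sqrt(v_g * a * S / tau_p). A sketch with invented, order-of-magnitude device numbers:

    import numpy as np

    def relaxation_resonance_hz(v_g, a, S, tau_p):
        # f_r rises with the differential gain a, hence with its increase
        # across the tuning range noted in the abstract
        return np.sqrt(v_g * a * S / tau_p) / (2 * np.pi)

    f_r = relaxation_resonance_hz(v_g=8.5e9,    # group velocity, cm/s
                                  a=5e-16,      # differential gain, cm^2
                                  S=5e15,       # photon density, cm^-3
                                  tau_p=2e-12)  # photon lifetime, s
    print(f_r / 1e9, "GHz")                     # ~16 GHz for these numbers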
10

An object-based analysis of cloud motion from sequences of METEOSAT satellite data

Newland, Franz Thomas, January 1999
The need for wind and atmospheric dynamics data for weather modelling and forecasting is well founded. Current texture-based techniques for tracking clouds in sequences of satellite imagery are robust at generating global cloud motion winds, but their use as wind data rests on many simplifying assumptions about the causal relationships between cloud dynamics and the underlying windfield. These can be summarised under the single assumption that clouds must act as passive tracers for the wind. The errors thus introduced are now significant in light of the improvements made to weather models and forecasting techniques since the first introduction of satellite-derived wind information in the late 1970s. In that time, the algorithms used to track cloud in satellite imagery have not changed fundamentally. There is therefore a need to address the simplifying assumptions and to adapt the nature of the analyses applied accordingly. A new approach to cloud motion analysis from satellite data is introduced in this thesis which tracks the motion of clouds at different scales, making it possible to identify and understand some of the different transport mechanisms present in clouds and to remove or reduce the dependence on the simplifying assumptions. Initial work in this thesis examines, using a fuzzy system, the suitability of different motion analysis tools for determining the motion of the cloud content in the imagery. It then proposes tracking clouds as flexible structures to analyse the motion of the clouds themselves, and using the nature of cloud edges to identify the atmospheric flow around the structures. To produce stable structural analyses, the cloud data are initially smoothed. A novel approach using morphological operators is presented that maintains cloud edge gradients whilst maximising coherence in the smoothed data. Clouds are analysed as whole structures, providing a new measure of synoptic-scale motion. Internal dynamics of the cloud structures are analysed using medial axis transforms of the smoothed data. Tracks of medial axes provide a new measure of cloud motion at the mesoscale. The sharpness in edge gradient is used as a new measure to identify regions of atmospheric flow parallel to a cloud edge (jet flows, which cause significant underestimation of atmospheric motion under the present approach) and regions where the flow crosses the cloud boundary. The different motion characteristics displayed by the medial axis tracks and edge information provide an indication of the atmospheric flow at different scales. In addition to generating new parameters for measuring cloud and atmospheric dynamics, the approach enables weather modellers and forecasters to identify the scale of flow captured by the currently used cloud tracers (both satellite-derived and from other sources). This would allow them to select the most suitable tracers for describing the atmospheric dynamics at the scale of their model or forecast. This technique would also be suitable for any other fluid flow analyses where coherent and stable gradients persist in the flow, and where it is useful to analyse the flow dynamics at more than one scale.
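The mesoscale step (smooth the cloud mask with morphological operators, extract its medial axis, and follow the skeleton between frames) can be sketched with scikit-image; the synthetic two-frame "cloud" and the centroid-shift motion estimate below are invented for illustration, not the thesis's tracker:

    import numpy as np
    from skimage.morphology import binary_closing, disk, medial_axis

    def skeleton_points(cloud_mask):
        smoothed = binary_closing(cloud_mask, disk(3))   # suppress small-scale noise
        return np.column_stack(np.nonzero(medial_axis(smoothed)))

    def mean_displacement(mask_t0, mask_t1):
        # crude internal-motion estimate: shift of the skeleton centroid
        p0, p1 = skeleton_points(mask_t0), skeleton_points(mask_t1)
        return p1.mean(axis=0) - p0.mean(axis=0)

    frame0 = np.zeros((64, 64), bool)
    frame0[20:40, 10:30] = True
    frame1 = np.roll(frame0, 5, axis=1)          # the same cloud, 5 pixels east
    print(mean_displacement(frame0, frame1))     # ~ [0. 5.]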
