181 |
Modelling and nonlinear analysis of aircraft ground manoeuvres. Coetzee, Etienne. January 2011 (has links)
No description available.
|
182 |
Digital prototyping of a dental articular simulator to test prosthetic components. Wang, Lin. January 2010 (has links)
No description available.
|
183 |
Reconstruction of 3D Solid Models from Orthographic Projections. Wang, Zhe. January 2010 (has links)
No description available.
|
184 |
Reconstruction of an Arbitrary View of a Moving Scene from Multiple Video Inputs. Smith, Timothy M. A. January 2009 (has links)
No description available.
|
185 |
Incorporating Higher Level Structure in Visual SLAM. Gee, Andrew P. January 2010 (has links)
No description available.
|
186 |
Building an eScience infrastructure for environmental science. Chiang, G.-T. January 2010 (has links)
The objective of this project is to build an eScience/grid infrastructure suitable for the environmental sciences and especially for hydrological science. The infrastructure allows a wide range of hydrological problems to be investigated and is particularly suitable for computationally intensive or multiple-scenario applications. To accomplish this objective, the project identified the shortcomings of current grid infrastructures for hydrological science and developed the missing components to fill the gap. In particular, three primary areas needed work: first, integrating the data and computing grids; second, visualizing geographic information from grid outputs; and third, implementing hydrological simulations on this infrastructure. A grid infrastructure, which consists of a computing grid and a data grid, has been built. In addition, the computing grid has been extended to use Amazon cloud computing resources. Users can carry out a complete simulation job life cycle on this infrastructure, from job submission and data management to metadata management. XML is used to handle visualization and metadata within the grid. I developed the Writing Keyhole Markup Language (WKML), a Fortran library that allows environmental scientists to visualize their model outputs in Google Earth. I have also developed the Writing Hydrological Markup Language (WHML) to describe hydrological data. Finally, an XPath-based tool integrated with RMCS has been developed to extract metadata from XML files. A hydrological pilot study investigated whether SHETRAN modelling could be used to predict downstream hydrological behaviour. The resulting SHETRAN synthetic Flood Frequency Curves (FFCs) suggest that simple short-term modelling can be extrapolated to estimate the impact of changes in land use and management on FFCs.
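As a rough illustration of the XPath-based metadata extraction described in this abstract, the minimal Python sketch below pulls key/value pairs from a model-output XML file. The element and attribute names ("simulation", "parameter", "name") and the example file name are hypothetical; the thesis's RMCS-integrated tool is not reproduced here.

```python
# A minimal sketch of XPath-based metadata extraction from a model-output XML file.
# Element/attribute names are hypothetical, not taken from WHML or the thesis tool.
import xml.etree.ElementTree as ET

def extract_metadata(xml_path, xpath="./simulation/parameter"):
    """Return a dict of metadata key/value pairs matched by an XPath query."""
    tree = ET.parse(xml_path)
    metadata = {}
    for elem in tree.getroot().findall(xpath):
        # Each matched element is assumed to carry a 'name' attribute and a text value.
        metadata[elem.get("name")] = (elem.text or "").strip()
    return metadata

if __name__ == "__main__":
    # Example usage against a hypothetical output file name.
    print(extract_metadata("shetran_run.xml"))
```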
|
187 |
A creative process for material selection and technology coupling in product design. Johnson, K. January 2002 (has links)
In this thesis, a subset of materials and products has been considered - those of sports equipment. First the relevant information about each is explored and structured in a database. This information is presented in a standard profile - a description of the character of a material - or as a 'map' of the space of materials - a visualisation of material behaviour. Two-dimensional maps displaying pairs of technical properties of materials are well known. In multiple dimensions, a completely new technique is required, that of multi-dimensional scaling. This is a technique for revealing 'similarities' and relationships in complex data sets; the full attribute profile of a material is precisely such a set. Both two-dimensional and multi-dimensional maps are explored as tools for product design. For visualising similarities between lightweight structural products, a second technique is required, that of mapping material properties (like elastic modulus and yield strength) with specific geometries (like second moment of area and section modulus). Here, these maps are used to compare similar functional elements in specific products and their relationship to material possibilities. The intention of this thesis is to develop an understanding of the interaction of materials and design (MAD). These interactions require new methods of selection, which are then demonstrated in a set of design tools: one for material selection and a second for structured inspiration. For each, a database of materials is linked to one of products. With this process, two methods of material selection, based on the visualisation techniques described earlier, are integrated with those of analysis - selection by similarity and selection by synthesis. In selection by similarity, the set of possible material solutions is expanded by comparing materials based on technical or aesthetic attributes. In selection by synthesis, innovative material solutions are created by assembling elements from those that are found in existing products. The same database is manipulated by a digital viewer for structured inspiration - for this, a series of images of materials is presented to the designer in random order. The viewer provides a visual stimulus for material selection and the designer is immediately linked to the appropriate information if requested - selection by inspiration. The MAD process of material selection requires the creative combination of each of these methods.
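To give a flavour of the multi-dimensional scaling technique mentioned in this abstract, the minimal Python sketch below projects a handful of materials, each described by several attributes, onto a two-dimensional 'map' in which similar materials plot close together. The materials, the rough attribute values and the use of scikit-learn are illustrative assumptions, not the thesis's data or implementation.

```python
# A minimal sketch of multi-dimensional scaling for material 'maps'.
# Attribute values are rough illustrative figures, not data from the thesis.
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

materials = ["ash wood", "carbon fibre", "aluminium alloy", "nylon"]
# Columns: density (Mg/m^3), elastic modulus (GPa), yield strength (MPa).
attributes = np.array([
    [0.7,  11,   50],
    [1.6, 120, 1200],
    [2.7,  70,  300],
    [1.1,   3,   60],
])

# Standardise so that no single attribute dominates the distance measure.
X = StandardScaler().fit_transform(attributes)
coords = MDS(n_components=2, random_state=0).fit_transform(X)

for name, (x, y) in zip(materials, coords):
    print(f"{name:16s} ({x:+.2f}, {y:+.2f})")
```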
|
188 |
Film and video restoration using rank-order models. Armstrong, S. January 1999 (has links)
This thesis develops a formalism for an image model that is based on rank-order operators. More commonly used as filters, the rank-order operators are here employed as predictors. A Laplacian excitation sequence is chosen to complete the model. Images are generated with the model and compared with those formed with an AR model. A multidimensional rank-order model is formed from vector medians for use with multidimensional image data. The first application using the rank-order model is an impulsive noise detector. This exploits the notion of 'multimodality' in the histogram of a difference image of the degraded image and a rank-order filtered version. It uses the EM algorithm and a mixture model to automatically determine thresholds for detecting the impulsive noise. This method compares well with other detection methods, which require manual setting of thresholds, and with stack filtering, which requires an undegraded training sequence. The impulsive noise detector is developed further to detect and remove degradation caused by scratches on 2-inch video tape. Additional techniques are developed to correct other defects such as line jitter and line fading. The second half of the thesis is concerned with reconstructing missing regions in images and image sequences. First of all, an interpolation method is developed based on rank-order predictors. This proves to be very computationally intensive, but the rank-order structure is shown to reconstruct image features well, doing remarkably well on edges. A method using the Gibbs sampler for reconstructing missing data in images is developed and results show that convergence is very rapid. Motion estimation and automatic detection of missing data is added to produce a method for automatically detecting and reconstructing missing data in image sequences. Reconstructions are of a very high quality and the method compares very well with similar AR based sampling methods. Improved reconstruction of edges is again observed.
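A minimal Python sketch of the detection idea, not the thesis's method: the degraded image is compared with a rank-order (median) filtered version and large differences are flagged as impulsive noise. The thesis sets the threshold automatically with an EM-fitted mixture model; a fixed threshold and SciPy's median filter are assumed here purely for illustration.

```python
# A minimal sketch of median-based impulsive noise detection and removal.
# The fixed threshold stands in for the thesis's EM-determined thresholds.
import numpy as np
from scipy.ndimage import median_filter

def detect_impulses(image, threshold=40.0, window=3):
    """Return a boolean mask of pixels likely corrupted by impulsive noise."""
    filtered = median_filter(image.astype(float), size=window)
    difference = np.abs(image.astype(float) - filtered)
    return difference > threshold

def remove_impulses(image, mask, window=3):
    """Replace flagged pixels with the median-filtered value."""
    filtered = median_filter(image.astype(float), size=window)
    restored = image.astype(float).copy()
    restored[mask] = filtered[mask]
    return restored
```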
|
189 |
Selective mesh refinement for rendering. Brown, P. J. C. January 1998 (has links)
A key task in computer graphics is the rendering of complex models. As a result, there exist a large number of schemes for improving the speed of the rendering process, many of which involve displaying only a simplified version of a model. When such a simplification is generated selectively, i.e. detail is only removed in specific regions of a model, we term this selective mesh refinement. Selective mesh refinement can potentially produce a model approximation which can be displayed at greatly reduced cost while remaining perceptually equivalent to a rendering of the original. For this reason, the field of selective mesh refinement has been the subject of dramatically increased interest recently. The resulting selective refinement methods, though, are restricted in both the types of model which they can handle and the form of output meshes which they can generate. Our primary thesis is that a selectively refined mesh can be produced by combining fragments of approximations to a model without regard to the underlying approximation method. Thus we can utilise existing approximation techniques to produce selectively refined meshes in n-dimensions. This means that the capabilities and characteristics of standard approximation methods can be retained in our selectively refined models. We also show that a selectively refined approximation produced in this manner can be smoothly geometrically morphed into another selective refinement in order to satisfy modified refinement criteria. This geometric morphing is necessary to ensure that detail can be added and removed from models which are selectively refined with respect to their impact on the current view frustum. For example, if a model is selectively refined in this manner and the viewer approaches the model then more detail may have to be introduced to the displayed mesh in order to ensure that it satisfies the new refinement criteria. By geometrically morphing this introduction of detail we can ensure that the viewer is not distracted by "popping" artifacts.
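The geometric morphing described here can be illustrated with a minimal Python sketch in which each vertex is blended linearly from its coarse-approximation position to its refined position as a blend factor goes from 0 to 1. This is an illustration of the "popping"-avoidance idea only, under assumed array-based vertex storage, and not the thesis's mesh representation or refinement machinery.

```python
# A minimal sketch of geometric morphing (geomorphing) between two levels of detail.
import numpy as np

def geomorph(coarse_vertices, refined_vertices, t):
    """Blend vertex positions; t=0 gives the coarse mesh, t=1 the refined mesh."""
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * coarse_vertices + t * refined_vertices

# Example: a single vertex moving as detail is introduced over several frames.
coarse = np.array([[0.0, 0.0, 0.0]])
refined = np.array([[0.1, 0.25, 0.0]])
for frame, t in enumerate(np.linspace(0.0, 1.0, 5)):
    print(frame, geomorph(coarse, refined, t))
```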
|
190 |
Adaptation of statistical language models for automatic speech recognition. Clarkson, P. R. January 1999 (has links)
Statistical language models encode linguistic information in such a way as to be useful to systems which process human language. Such systems include those for optical character recognition and machine translation. Currently, however, the most common application of language modelling is in automatic speech recognition, and it is this that forms the focus of this thesis. Most current speech recognition systems are dedicated to one specific task (for example, the recognition of broadcast news), and thus use a language model which has been trained on text which is appropriate to that task. If, however, one wants to perform recognition on more general language, then creating an appropriate language model is far from straightforward. A task-specific language model will often perform very badly on language from a different domain, whereas a model trained on text from many diverse styles of language might perform better in general, but will not be especially well suited to any particular domain. Thus the idea of an adaptive language model whose parameters automatically adjust to the current style of language is an appealing one. In this thesis, two adaptive language models are investigated. The first is a mixture-based model. The training text is partitioned according to the style of text, and a separate language model is constructed for each component. Each component is assigned a weighting according to its performance at modelling the observed text, and a final language model is constructed as the weighted sum of each of the mixture components. The second approach is based on a cache of recent words. Previous work has shown that words that have occurred recently have a higher probability of occurring in the immediate future than would be predicted by a standard trigram language model. This thesis investigates the hypothesis that more recent words should be considered more significant within the cache by implementing a cache in which a word's recurrence probability decays exponentially over time. The problem of how to predict the effect of a particular language model on speech recognition accuracy is also addressed in this thesis. The results presented here, as well as those of other recent research, suggest that perplexity, the most commonly used method of evaluating language models, is not as well correlated with word error rate as was once thought. This thesis investigates the connection between a language model's perplexity and its effect on speech recognition performance, and will describe the development of alternative measures of a language model's quality which are better correlated with word error rate. Finally, it is shown how the recognition performance which is achieved using mixture-based language models can be improved by optimising the mixture weights with respect to these new measures.
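The exponentially decaying cache described in this abstract can be illustrated with a minimal Python sketch in which a word seen d positions ago contributes a weight of exp(-decay * d) to the cache distribution, which is then interpolated with a base n-gram probability. The decay constant and interpolation weight below are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of an exponentially decaying word cache interpolated with
# a base n-gram probability. Constants are illustrative only.
import math
from collections import defaultdict

def cache_probabilities(history, decay=0.005):
    """Return a cache distribution where a word seen d positions ago has weight exp(-decay * d)."""
    weights = defaultdict(float)
    n = len(history)
    for position, word in enumerate(history):
        distance = n - position  # 1 = most recently seen word
        weights[word] += math.exp(-decay * distance)
    total = sum(weights.values())
    return {word: w / total for word, w in weights.items()}

def interpolate(p_ngram, p_cache, cache_weight=0.1):
    """Combine a base n-gram probability with the cache probability."""
    return (1.0 - cache_weight) * p_ngram + cache_weight * p_cache
```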
|