About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Optimisation of performance in the triple jump using computer simulation

Allen, Samuel J. January 2010 (has links)
While experimental studies can provide information on what athletes are doing, they are not suited to determining what they should be doing in order to improve their performance. The aim of this study was to develop a realistic computer simulation model of triple jumping in order to investigate optimum technique. A 13-segment subject-specific torque-driven computer simulation model of triple jumping was developed, with wobbling masses within the shank, thigh, and torso. Torque generators were situated at each hip, shoulder, knee, ankle, and ball joint. Kinetic and kinematic data were collected from a triple jump using a force plate and a Vicon motion analysis system. Strength characteristics were measured using an isovelocity dynamometer from which torque-angle and torque-angular velocity relationships were calculated. Segmental inertia parameters were calculated from anthropometric measurements. Viscoelastic parameters were obtained by matching an angle-driven model to performance data for each phase, and a common set for the three contact phases was determined. The torque-driven model was matched to performance data for each phase individually by varying torque generator activation timings using a genetic algorithm. The matching produced a close agreement between simulation and performance, with differences of 3.8%, 2.7%, and 3.1% for the hop, step, and jump phases respectively. The model showed good correspondence with performance data, demonstrating sufficient complexity for subsequent optimisation of performance. Each phase was optimised for jump distance with penalties for excessive angular momentum at take-off. Optimisation of each phase produced an increase in jump distance from the matched simulations of 3.3%, 11.1%, and 8.2% for the hop, step, and jump respectively. The optimised technique showed a symmetrical shoulder flexion consistent with that employed by elite performers. The effects of increasing strength and neglecting angular momentum constraints were then investigated. Increasing strength was shown to improve performance, and angular momentum constraints were proven to be necessary in order to reproduce realistic performances.
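As a hedged illustration of the matching stage described above, the sketch below shows a bare-bones genetic algorithm tuning a vector of activation timings against a target. Everything in it is an assumption for illustration only: the timing vector, the surrogate fitness function and the GA parameters stand in for the thesis's 13-segment torque-driven model and its kinematic matching score.

```python
# Illustrative sketch: a simple genetic algorithm tuning torque-generator
# activation timings so that a simulation matches recorded performance.
# The fitness function is a toy surrogate, not the 13-segment model.
import random

TARGET = [0.12, 0.35, 0.48, 0.71, 0.90]  # hypothetical measured timings

def fitness(timings):
    # Negative Euclidean distance between candidate and target timings
    # (surrogate for the kinematic matching score used in the thesis).
    return -sum((t - m) ** 2 for t, m in zip(timings, TARGET)) ** 0.5

def mutate(timings, rate=0.1):
    return [min(1.0, max(0.0, t + random.gauss(0, rate))) for t in timings]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.random() for _ in range(5)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                       # keep the best candidates
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(40)]
    population = elite + children

best = max(population, key=fitness)
print("best timings:", [round(t, 3) for t in best])
```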
72

Knowledge elicitation in design : a case study of page layout design

Tunnicliffe, A. J. January 1990 (has links)
Knowledge elicitation remains a fundamental feature of Knowledge Based Systems evolution. However, there is insufficient evidence to support the presumption that the knowledge elicitation philosophy is viable for design. Scant effort has been applied to research into techniques for design elicitation, and the nature of design is poorly considered. In particular, design tasks that involve visual design skills appear especially neglected. The scarcity of proven knowledge elicitation methods for design has not dampened the enthusiasm for "Intelligent" Computer Aided Design (ICAD) systems. However, it is argued that design knowledge acquired from ad hoc, unsubstantiated procedures, and knowledge that is undocumented and untested, cannot be considered reliable. Indeed, it is widely observed that current ICAD systems lack intelligent performance, and the need for sound design elicitation methods is pressing. Here, knowledge elicitation in design is promoted through a review of design and knowledge elicitation research literature. Design must be considered dissimilar to scientific problem solving, and the holistic nature of the task is an important characteristic. Further, the spatial, diagrammatical and drawing forms of communication that are manifest in design must be tackled. A method for the elicitation of design knowledge is proposed and tested in the domain of page layout design. Computerised methods of knowledge acquisition currently lack the sophistication to unravel the difficulties associated with design elicitation. It is concluded that the personal interview strategy is appropriate, in which the nature of the design task and the visual and spatial components are equitably considered. The understanding of page layout design is demonstrated in a communicable report and tested through an evaluation study. It is concluded that methodological principles of knowledge elicitation are appropriate to design, and a suitable method is outlined. The domain of page layout design illustrates that the techniques are successful, useful and practical.
73

Application of Bayesian Belief Networks to system fault diagnostics

Lampis, Mariapia January 2010 (has links)
Fault diagnostic methods aim to recognise when a fault exists on a system and to identify the failures that have caused it. The fault symptoms are obtained from readings of sensors located on the system; when the observed readings do not match those expected, a fault may exist. Using the detailed information provided by the sensors, a list of the failures that are potential causes of the symptoms can be deduced. In recent decades, fault diagnostics has received growing attention due to the complexity of modern systems and the consequent need for more sophisticated techniques to identify failures when they occur. Detecting the causes of a fault quickly and efficiently means reducing the costs associated with system unavailability and, in certain cases, avoiding the risks of unsafe operating conditions. Bayesian Belief Networks (BBNs) are probabilistic graphical models that were developed for artificial intelligence applications but are now applied in many fields. They are ideal for modelling the causal relations between faults and symptoms used in fault diagnostic processes, and the probabilities of events within a BBN can be updated following observations (evidence) about the system state. This thesis investigates how BBNs can be applied to the diagnosis of faults on a system using a model-based approach. Initially, Fault Trees (FTs) are constructed to indicate how component failures can combine to cause unexpected deviations in the variables monitored by the sensors. The FTs are then converted into BBNs, and these are combined into one network that represents the system. The posterior probabilities of the component failures give a measure of which components have caused the observed symptoms. The technique is able to handle dynamics in the system by introducing dynamic patterns for the sensor readings into the logic structure of the BBNs. The method is applied to two systems: a simple water tank system and a more complex fuel rig system. The results from the two applications are validated using two C++ simulation codes, by which the system fault states are obtained together with the failures that cause them. The accuracy of the BBN results is evaluated by comparing the actual causes found with the simulation against the potential causes obtained with the diagnostic method.
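To make the posterior-probability step concrete, here is a minimal sketch of exact inference by enumeration in a tiny BBN with two component failures and one sensor symptom. The network structure and all probability values are invented for illustration; the thesis builds its networks by converting fault trees for real water tank and fuel rig systems.

```python
# Illustrative sketch: exact inference by enumeration in a tiny Bayesian
# Belief Network with two component failures (F1, F2) and one sensor
# symptom S. All CPT numbers are assumptions for illustration only.
from itertools import product

P_F1, P_F2 = 0.02, 0.05            # prior failure probabilities
# P(S=1 | F1, F2): the symptom is likely if either component has failed
P_S = {(0, 0): 0.01, (0, 1): 0.90, (1, 0): 0.85, (1, 1): 0.99}

def joint(f1, f2, s):
    p = (P_F1 if f1 else 1 - P_F1) * (P_F2 if f2 else 1 - P_F2)
    p_s1 = P_S[(f1, f2)]
    return p * (p_s1 if s else 1 - p_s1)

# Posterior P(Fi=1 | S=1): which component most likely caused the symptom?
evidence = sum(joint(f1, f2, 1) for f1, f2 in product((0, 1), repeat=2))
posterior_f1 = sum(joint(1, f2, 1) for f2 in (0, 1)) / evidence
posterior_f2 = sum(joint(f1, 1, 1) for f1 in (0, 1)) / evidence
print(f"P(F1 fail | symptom) = {posterior_f1:.3f}")
print(f"P(F2 fail | symptom) = {posterior_f2:.3f}")
```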
74

Digital particle image velocimetry (DPIV) : systematic error analysis

Putman, Edward R. J. January 2011 (has links)
Digital Particle Image Velocimetry (DPIV) is a flow diagnostic technique that provides velocity measurements within a fluid whilst also offering flow visualisation during analysis. Whole-field velocity measurements are calculated by using cross-correlation algorithms to process sequential images of flow tracer particles recorded with a laser-camera system. The technique is capable of calculating velocity fields in both two and three dimensions and is the most widely used whole-field measurement technique in flow diagnostics. With the advent of time-resolved DPIV it is now possible to resolve the 3D spatio-temporal dynamics of turbulent and transient flows as they develop over time. Minimising the systematic and random errors associated with the cross-correlation of flow images is essential for providing accurate quantitative results. This research has explored a variety of cross-correlation algorithms and techniques developed to increase the accuracy of DPIV measurements. Through a combination of theoretical modelling and experimental verification for a uniform particle image displacement, it is shown that these methods are unable to suppress either the inherent errors associated with the random distribution of particle images within each interrogation region or the background noise of an image. The study demonstrates that normalising the correlation field by the signal strength contributing to each point of the field suppresses both the mean bias and the RMS error. A further enhancement to this routine has led to the development of a robust cross-correlation algorithm that is able to suppress the systematic errors associated with the random distribution of particle images and background noise.
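For context, the sketch below estimates the displacement of a single interrogation region by FFT-based cross-correlation, the core operation DPIV builds on. It shows a plain (unnormalised) correlation on synthetic data; the normalisation by per-point signal strength that the thesis develops is not reproduced here.

```python
# Illustrative sketch: displacement estimation for one DPIV interrogation
# region via FFT-based cross-correlation on a synthetic particle image.
import numpy as np

rng = np.random.default_rng(0)
window_a = rng.random((32, 32))                        # synthetic particle image
true_shift = (3, 5)
window_b = np.roll(window_a, true_shift, axis=(0, 1))  # uniformly displaced copy

# Cross-correlation via the correlation theorem
f_a = np.fft.fft2(window_a - window_a.mean())
f_b = np.fft.fft2(window_b - window_b.mean())
corr = np.fft.ifft2(np.conj(f_a) * f_b).real

# The peak location gives the displacement (wrapped back to signed values)
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
print("estimated displacement:", shift)   # expect [3, 5]
```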
75

Audio-coupled video content understanding of unconstrained video sequences

Lopes, Jose E. F. C. January 2011 (has links)
Unconstrained video understanding is a difficult task. The main aim of this thesis is to recognise the nature of objects, activities and environment in a given video clip using both audio and video information. Traditionally, audio and video information have not been applied together to solve such a complex task, and for the first time we propose, develop, implement and test a new framework of multi-modal (audio and video) data analysis for context understanding and labelling of unconstrained videos. The framework relies on feature selection techniques and introduces a novel algorithm (PCFS) that is faster than the well-established SFFS algorithm. We use the framework to study the benefits of combining audio and video information in a number of different problems. We begin by developing two independent content recognition modules. The first is based on image sequence analysis alone, and uses a range of colour, shape, texture and statistical features from image regions, with a trained classifier, to recognise the identity of the objects, activities and environment present. The second module uses audio information only, and recognises activities and environment. Both approaches are preceded by detailed pre-processing to ensure that correct video segments containing both audio and video content are present, and that the developed system can be made robust to changes in camera movement, illumination, random object behaviour, etc. For both audio and video analysis, we use a hierarchical approach of multi-stage classification, so that difficult classification tasks can be decomposed into simpler and smaller ones. When combining the two modalities, we compare fusion techniques at different levels of integration and propose a novel algorithm that combines the advantages of both feature-level and decision-level fusion. The analysis is evaluated on a large amount of test data comprising unconstrained videos collected for this work. Finally, we propose a decision correction algorithm, showing that further steps towards combining multi-modal classification information effectively with semantic knowledge generate the best possible results.
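As a simple illustration of decision-level fusion, one of the integration levels compared in the thesis, the sketch below combines class posteriors from hypothetical audio-only and video-only classifiers by weighted averaging. The labels, posteriors and weights are assumed values; the thesis's actual fusion algorithm combines feature- and decision-level advantages and is not reproduced here.

```python
# Illustrative sketch: decision-level fusion of audio and video classifier
# outputs by weighted averaging of class posteriors. All values are
# assumptions chosen for illustration.
labels = ["indoor", "outdoor", "vehicle"]
video_posteriors = [0.50, 0.30, 0.20]   # hypothetical video-only classifier
audio_posteriors = [0.20, 0.25, 0.55]   # hypothetical audio-only classifier
w_video, w_audio = 0.6, 0.4             # modality confidence weights

fused = [w_video * v + w_audio * a
         for v, a in zip(video_posteriors, audio_posteriors)]
decision = labels[max(range(len(fused)), key=fused.__getitem__)]
print("fused posteriors:", [round(p, 3) for p in fused], "->", decision)
```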
76

High-speed pattern cutting using real-time computer vision techniques

Tao, Li G. January 1994 (has links)
This thesis presents a study of computer vision for guiding cutting tools to perform high-speed pattern cutting on deformable materials. Several new concepts for establishing a computer vision system to guide a CO2 laser beam to separate lace are presented. The aim of this study, which forms part of an automatic lace separation project, is to determine a cutting path on lace in real time using computer vision techniques. The purpose of the project is to replace the current lace separation process, which uses a mechanical knife or scissors. The research on computer vision has concentrated on the following aspects:

1. A weighted incremental tracking algorithm based on a reference map is proposed, examined and implemented. This is essential for tracking an arbitrarily defined path across the surface of a patterned deformable material such as lace. Two methods, a weighting function and an infinite impulse response filter, are used to cope with lateral distortions of the input image. Matching three consecutive map lines against one image line is introduced to cope with longitudinal distortion. A hybrid software and hardware approach boosts the tracking speed to 2-4 times that of the current mechanical method.

2. A modified Hough transform, combined with the weighted incremental tracking algorithm, is proposed and investigated to find the start point for tracking, enabling tracking to begin from the correct position on the map.

3. In order to maintain consistent working conditions for the vision system, the light source, camera threshold, and synchronisation of the camera scan rate with lace movement are studied.

Two test rigs combining the vision and cutting systems have been built and used to cut lace successfully.
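A hedged sketch of the idea behind point 1, weighted incremental tracking, is given below: one camera scanline is matched against three consecutive reference-map lines, with candidate lateral offsets weighted towards the previously tracked position. The data, the Gaussian weighting function and the search range are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch: matching one camera scanline against three
# consecutive reference-map lines, weighting candidate offsets towards
# the previously tracked position to tolerate lateral distortion.
import numpy as np

rng = np.random.default_rng(1)
map_lines = rng.integers(0, 2, size=(3, 64))      # 3 consecutive binary map lines
image_line = np.roll(map_lines[1], 2)             # scanline, shifted 2 px laterally
prev_offset, sigma = 0, 4.0                       # last tracked lateral offset

best = None
for row in range(3):                              # 3 map lines vs 1 image line
    for offset in range(-8, 9):
        score = np.sum(np.roll(map_lines[row], offset) == image_line)
        weight = np.exp(-((offset - prev_offset) ** 2) / (2 * sigma ** 2))
        if best is None or score * weight > best[0]:
            best = (score * weight, row, offset)

_, row, offset = best
print(f"matched map line {row} at lateral offset {offset}")  # expect row 1, offset 2
```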
77

The development of a mathematical model and computer program for simulating the injection moulding of thermosetting elastomer materials

Bowers, Stephen January 1990 (has links)
A mathematical model for the simulation of the injection moulding of thermosetting elastomers has been developed. The model uses suitably reduced forms of the fundamental equations of continuity, momentum and energy as a basis, with a constitutive equation to describe how the elastomer viscosity varies with local flow conditions. A cure model is used to calculate cure levels during the injection phase, and the time taken for the final moulded component to reach a specified minimum cure level during the subsequent cure cycle. Moulds are defined by splitting the various elastomer flowpaths into a network of end-to-end connected geometric entities of simple cross-section, for instance circular, rectangular and annular. The mould elements are discretised using a finite difference mesh, and the equations which comprise the model are cast into a suitable finite difference form for solution. Solution of the continuity and momentum equations involves numerical integration using the trapezoidal rule, and the energy equation is solved using a fully implicit Crank-Nicolson method, since this gives unconditional stability. The model also allows for a wall-slip boundary condition. The flow model has been experimentally validated by simulating an extrusion rheometer and comparing predicted capillary pressure drops with measured ones, and by comparing real injection moulding pressure drops with corresponding predictions. The cure simulation has been validated by comparing predicted cure times with cure times measured during the injection moulding trials. The effects of varying the material properties, heat transfer coefficient and finite difference mesh geometric parameters on simulated results have been assessed, and the effect of wall slip on simulated injection results has been investigated.
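To illustrate the numerical core of the energy-equation solution, the sketch below takes Crank-Nicolson steps for a one-dimensional heat conduction problem between hot mould walls. The grid size, time step, material diffusivity and boundary temperatures are assumed values; the actual model couples this kind of solver with the flow and cure calculations.

```python
# Illustrative sketch: Crank-Nicolson time stepping for the 1-D heat
# equation, the unconditionally stable scheme named for the energy
# equation. All physical values are assumptions for illustration.
import numpy as np

n, dx, dt, alpha = 21, 0.001, 1.0, 1e-7   # nodes, spacing (m), step (s), diffusivity
r = alpha * dt / (2 * dx ** 2)

T = np.full(n, 120.0)                      # elastomer initially at 120 C
T[0] = T[-1] = 180.0                       # hot mould walls (Dirichlet BCs)

# Assemble the tridiagonal system (I + r*K) T_new = (I - r*K) T_old
A = np.eye(n) * (1 + 2 * r) - (np.eye(n, k=1) + np.eye(n, k=-1)) * r
B = np.eye(n) * (1 - 2 * r) + (np.eye(n, k=1) + np.eye(n, k=-1)) * r
A[0, :], A[-1, :] = 0, 0                   # fixed-temperature boundary rows
A[0, 0] = A[-1, -1] = 1
B[0, :], B[-1, :] = 0, 0
B[0, 0] = B[-1, -1] = 1

for _ in range(600):                       # march 600 steps (600 s)
    T = np.linalg.solve(A, B @ T)
print(f"centre temperature after 600 s: {T[n // 2]:.1f} C")
```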
78

Numerical modelling of multi-material interfaces

Hill, Ryan January 2011 (has links)
Remapping (conservative interpolation) within arbitrary Lagrangian-Eulerian (ALE) schemes requires scalar values to be interpolated from one computational mesh to another of differing geometry. Advection methods are typically utilised for the remapping stage, with fluxes created by the overlapping volumes between adjacent elements. In this thesis, a second-order, conservative, sign-preserving remapping scheme is developed utilising concepts of the Multidimensional Positive Definite Advection Transport Algorithm (MPDATA). The basic non-oscillatory and non-oscillatory infinite-gauge options are derived for remapping in volume coordinates. For the first time, an MPDATA-based remapping has been successfully implemented into full ALE schemes. Inherent properties of MPDATA are exploited to reduce wall-heating errors via the second-order filtering option, and the resulting increase in accuracy and symmetry of numerical solutions is demonstrated. For material interfaces, an adaptive mixed-cell approach is proposed which takes advantage of the efficient computational stencil of MPDATA. The proposed approach utilises all available data in the calculation of the pseudo-velocities in MPDATA in order to retain second-order accuracy and multi-dimensionality at material interfaces. The effectiveness of the adaptive mixed-cell approach is highlighted via examples featuring artificial material interfaces. Theoretical developments are supported by numerical testing. All test cases compare the accuracy of the MPDATA-based schemes with a van Leer-based scheme generalised to multiple dimensions via isotropic or Strang-split remapping. The results demonstrate the advantages of the fully multi-dimensional MPDATA remapping.
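For readers unfamiliar with MPDATA, the sketch below shows its basic one-dimensional advection form: a first-order donor-cell pass followed by a corrective pass with an antidiffusive pseudo-velocity. The grid, Courant number and initial field are assumptions for illustration; the thesis's contribution, remapping in volume coordinates within ALE with a mixed-cell treatment, is considerably more involved.

```python
# Illustrative sketch: basic 1-D MPDATA advection on a periodic grid.
# A donor-cell (upwind) pass is corrected with an antidiffusive
# pseudo-velocity, giving second-order, sign-preserving transport.
import numpy as np

def donor_cell(psi, c):
    # Upwind flux at interface i+1/2 for face Courant numbers c
    flux = np.maximum(c, 0) * psi + np.minimum(c, 0) * np.roll(psi, -1)
    return psi - (flux - np.roll(flux, 1))

n, c, eps = 100, 0.4, 1e-15
psi = np.exp(-0.5 * ((np.arange(n) - 30) / 5.0) ** 2)  # Gaussian pulse

for _ in range(50):
    courant = np.full(n, c)                  # uniform face Courant numbers
    psi_star = donor_cell(psi, courant)      # first-order upwind pass
    # Antidiffusive pseudo-velocity at each interface (basic MPDATA form)
    num = (np.abs(courant) - courant ** 2) * (np.roll(psi_star, -1) - psi_star)
    c_anti = num / (np.roll(psi_star, -1) + psi_star + eps)
    psi = donor_cell(psi_star, c_anti)       # corrective pass

print(f"mass conserved: {psi.sum():.6f}, peak after 50 steps: {psi.max():.3f}")
```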
79

A system for aiding the user assimilation of acquired motorsport data

Parker, Matthew C. January 2010 (has links)
A racing car is a complex machine, featuring many adjustable components used to influence the car's performance and tune it to a circuit, the prevailing conditions and the driver's style. A race team must continually monitor the car's performance, and a race engineer communicates with the driver to decide how best to optimise the car as well as how to extract the most from the driver himself. Analysis of acquired vehicle performance data is an intrinsic part of this process. This thesis presents an investigation into methods to aid the motorsport user's assimilation of acquired vehicle performance data. The work was directly prompted by personal experience and published opinion, both of which find that the full potential of acquired data in motorsport is seldom realised, primarily because of the limited time and resources available to a racing team for data analysis. A complete solution including data management methods and visualisation tools was conceived here as a means of addressing these issues. This work focuses on part of the overall solution concept: the development of a visualisation application giving the user a detailed and realistic three-dimensional replay of a data set. The vehicle's motion is recreated from acquired data through a kinematic vehicle model driven by measured damper and ride height data. Ground displacement is computed from wheel speed and accelerometer measurements, as well as from a new optical sensor approach aiming to achieve better accuracy. This implements a two-dimensional auto-correlation of doubly exposed ground images, calibrated to distance on the basis of an integrated ride height measurement. Three sensor units are used, allowing not only displacement but also heading data to be derived. The result of the work described in this thesis is a proof of principle of both the display and the sensor system, both of which were deemed worthy of further study and development to fully meet the demands of the motorsport application. The visualisation tool presented a new and applicable method of viewing acquired data, whilst the sensor was proven as a new method of deriving vehicle position data from potentially low-cost hardware.
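The step from three sensor displacements to heading can be sketched as a small-angle rigid-body fit: given planar displacement measurements at three known mounting positions, a linear least-squares system yields the translation and heading change. The sensor layout and readings below are hypothetical numbers chosen so the answer is easy to check; the thesis's optical sensors obtain their displacements via auto-correlation of doubly exposed ground images.

```python
# Illustrative sketch: recovering planar translation and heading change
# from displacements measured at three sensor positions, assuming
# small-angle rigid-body motion. All values are hypothetical.
import numpy as np

# Sensor mounting positions in the body frame (m)
P = np.array([[1.2, 0.0], [-0.8, 0.5], [-0.8, -0.5]])
# Measured ground displacements at each sensor over one sample (m)
d = np.array([[0.100, 0.012], [0.095, -0.008], [0.105, -0.008]])

# Small-angle model: d_i = t + dtheta * (-p_y, p_x), unknowns (tx, ty, dtheta)
A = np.zeros((6, 3))
b = d.ravel()
for i, (px, py) in enumerate(P):
    A[2 * i] = [1, 0, -py]      # x-component: tx - dtheta*py
    A[2 * i + 1] = [0, 1, px]   # y-component: ty + dtheta*px
tx, ty, dtheta = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"translation = ({tx:.3f}, {ty:.3f}) m, "
      f"heading change = {np.degrees(dtheta):.2f} deg")  # expect 0.57 deg
```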
80

Methodologies for CIM systems integration in small batch manufacturing

Walton, Andrew N. January 1990 (has links)
This thesis is concerned with identifying the problems and constraints faced by small batch manufacturing companies during the implementation of Computer Integrated Manufacturing (CIM). The main aim of this work is to recommend generic solutions to these problems, with particular regard to those constraints arising from the need for CIM systems integration involving both new and existing systems and procedures. The work has involved the application of modern computer technologies, including suitable hardware and software tools, in an industrial environment. Since the research has been undertaken with particular emphasis on the industrial implementor's viewpoint, it is supported by the results of a two-phased implementation of computer-based control systems within the machine shop of a manufacturing company. This involved the implementation of a Distributed Numerical Control system on a single machine in a group technology cell of machines, followed by the evolution of this system into Cell and Machine Management Systems providing a comprehensive decision support and information distribution facility for the foremen and operators within the cell. The work also required the integration of these systems with existing factory-level manufacturing control and CADCAM functions. Alternative approaches that may have been applicable under differing conditions have been investigated, and the implications of this work for CIM systems integration in small batch manufacturing companies have been evaluated with regard not only to the users within an industrial company but also to the systems suppliers external to the company. The work has resulted in generic contributions to knowledge by complementing CIM systems integration research with regard to the problems encountered; cost implications; the use of appropriate methodologies, including the role of emerging international standard methods, tools and technologies; and the importance of 'human integration' when implementing CIM systems in a real industrial situation.
