1

Measuring and understanding segregation

Haw, David January 2015 (has links)
Schelling's famous "spatial proximity" model of segregation was first introduced in 1969. His work sets out to explore residential dynamics in populations of more than one ethnicity. In particular, the notion of "tolerance" shows how even a small need for familiarity within one's neighbourhood can result in large-scale segregation. Schelling's "bounded neighbourhood model", outlined in the same publication, has however received much less attention from economists and mathematicians alike. This thesis provides a mathematical description of the latter model as a nonlinear dynamical system with which to explore the consequences of Schelling's intuition. In particular, we are able to deduce conditions under which segregation is not inevitable. The effect of varying the parameters and inputs of the model is studied in detail, and we use techniques from network theory and nonlinear dynamics to develop further variants of the model, beginning with those suggested by Schelling himself. Some new measures are developed that aid in the quantitative description of the equilibria of the model, based on the existing concepts of homophily and modularity. These developments enhance the power of Schelling's model in describing social dynamics. Additional work focuses on the study of networks of social interactions. In particular, we develop the idea of measuring segregation at the level of an individual agent via the use of different measures of centrality. Some simple examples illustrate the need for a range of measures in order to encapsulate an intuitive understanding of this complex phenomenon. This work enriches the toolbox of segregation measures available for future studies, allowing for deeper understanding of the structure of social systems.
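The bounded neighbourhood model admits a compact dynamical-systems illustration. The Python sketch below is a toy version with linear tolerance schedules; the function, parameter names and values are illustrative assumptions, not the formulation developed in the thesis.

```python
def bounded_neighbourhood_step(b, w, dt=0.01, Rb0=5.0, Rw0=5.0,
                               Bmax=100.0, Wmax=100.0):
    """One Euler step of a toy bounded neighbourhood model (assumed form).

    b, w       : current numbers of each group in the neighbourhood.
    Rb0, Rw0   : tolerance of the most tolerant member of each group.
    Bmax, Wmax : total population available to each group.
    Linear tolerance schedule: the marginal member of group b tolerates
    a ratio w/b of at most Rb0 * (1 - b/Bmax), and symmetrically for w.
    """
    db = b * Rb0 * (1.0 - b / Bmax) - w   # = b * (tolerated ratio - w/b)
    dw = w * Rw0 * (1.0 - w / Wmax) - b
    return max(b + dt * db, 0.0), max(w + dt * dw, 0.0)

b, w = 10.0, 12.0
for _ in range(20_000):
    b, w = bounded_neighbourhood_step(b, w)
print(f"b = {b:.1f}, w = {w:.1f}")
# With these tolerances the mixed state b = w = 80 is a stable equilibrium,
# illustrating that segregation is not inevitable; halving Rb0 and Rw0 turns
# it into a saddle and one group eventually vanishes.
```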
2

Sensitivity analysis in tolerance allocation

Wan Din, Wan Ibrahim January 2014 (has links)
In a Computer Aided Design (CAD) model the shape is usually defined as a boundary representation, a cell complex of faces, edges, and vertices. The boundary representation is generated from a system of geometric constraints, with parameters as degrees of freedom. The dimensions of the boundary representation are determined by the parameters in the CAD system used to model the part, and every parametric perturbation may produce different changes in the part model's shape and dimensions. Thus, one can compute dimensional sensitivity to parameter perturbations. A "Sensitivity Analysis" method is proposed to automatically quantify the dependencies of the Key Characteristic dimensions on each of the feature parameters in a CAD model. Once the sensitivities of the feature parameters to Key Characteristic dimensions have been determined, the appropriate perturbation of each parameter to cause a desired change in a critical dimension can be determined. This methodology is then applied to real tolerancing applications in mechanical assembly models to show the efficiency of this newly developed strategy. The approach can identify where specific tolerances need to be applied to a Key Control Characteristic dimension, the range of part tolerances that could be used to achieve the desired Key Product Characteristic dimension tolerances, and also whether existing part tolerances make it impossible to achieve the desired Key Product Characteristic dimension tolerances. This thesis provides an explanation of a novel automated tolerance allocation process for an assembly model based on the parametric CAD sensitivity method. The objective of this research is to expose the relationship between the parameter sensitivity of CAD designs of mechanical assembly products and tolerance design. This opens potential new avenues of research into how to develop standard processes and methodologies for geometrical dimensioning and tolerancing (GD&T) in a digital design tools environment known as Digital MockUp (DMU).
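The core sensitivity computation can be sketched as a finite-difference Jacobian of the Key Characteristic (KC) dimensions with respect to the feature parameters. This is a minimal illustration assuming a hypothetical `measure_kcs` wrapper that regenerates the CAD part and measures it; it is not the thesis's implementation.

```python
import numpy as np

def sensitivity_matrix(measure_kcs, params, h=1e-6):
    """Central-difference sensitivities S[i, j] = d(KC_i) / d(param_j).

    measure_kcs : callable mapping a feature-parameter vector to a vector
                  of KC dimensions (e.g. regenerate the CAD model, measure).
    params      : nominal parameter values.
    """
    p = np.asarray(params, dtype=float)
    m = np.asarray(measure_kcs(p)).size
    S = np.zeros((m, p.size))
    for j in range(p.size):
        step = h * max(abs(p[j]), 1.0)      # scale step to parameter size
        dp = np.zeros_like(p)
        dp[j] = step
        S[:, j] = (np.asarray(measure_kcs(p + dp))
                   - np.asarray(measure_kcs(p - dp))) / (2.0 * step)
    return S

# Given a desired change delta_d in the KC dimensions, a minimum-norm
# parameter perturbation follows from the pseudo-inverse:
#   delta_p = np.linalg.pinv(S) @ delta_d
```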
3

Markov chain analysis of automatic transfer line with buffer stock

Buzacott, J. A. January 1967 (has links)
Reliability has long been one of the criteria which engineers have used when choosing between alternative designs of components or systems. However, reliability was primarily understood in a qualitative way. High reliability was achieved by a process of careful design, testing to failure, and redesigning to incorporate the lessons of the tests. When the product was in use, the details of in-service failures would be fed back to the designer. This process has been well applied in the aero engine field. When the test-to-failure approach could not be used because of the risk to human life associated with failure, products were usually overdesigned so that failure was manifestly unlikely. In the design of large dams the product was so expensive that a more quantitative approach had to be used, and the earliest application of probability and statistics to engineering occurred here. Methods were developed for estimating the return period or probability of large floods which would overtop the structure (Fuller, 1914).
4

Verifying influence diagrams using dimensional analysis

Komanapalli, Golda Word January 2012 (has links)
System Dynamics (SD) is an approach that has found wide application in systems modelling. Although originally rooted in the engineering discipline, SD models have been formulated for problems in diverse fields including management, health, education and the social sciences, and the approach has attracted interest from modellers with little mathematical or engineering training. Developing a valid model is of primary importance in the System Dynamics modelling process. To establish that a model produces the right behaviour for the right reasons, it is essential to ensure that the structure of the model represents the corresponding real-world system. Amongst the verification procedures employed in the model-building process, dimensional analysis is used to verify the syntactical correctness of the model's equations. However, dimensional analysis is frequently not given the highest priority as a verification tool in the model-building process, especially among those less experienced in mathematical modelling or who lack confidence in the use of mathematics. The potential lack of dimensional consistency within some System Dynamics models therefore raises serious doubts about the validity of the models, the results generated and hence the policy decisions that follow. The aim of this research is to summarise the various problems related to validation and verification that can occur in the process of SD modelling and to suggest an alternative approach. Firstly, this research devises an algorithm, "TAID" (Tree Analysis of Influence Diagrams), which analyses influence diagrams and derives dimensionally correct equations. Secondly, a software tool, "I, Model", is developed to demonstrate the utility of the algorithm. Finally, the revised approach to SD modelling is evaluated to measure its impact on the learning and modelling experience of SD modellers. The outcomes of the small evaluation study indicate that the modellers preferred the TAID approach over the traditional SD modelling approach during the model implementation stage. The two principal benefits that this research can offer to the SD community are: firstly, a software tool based on this new approach that can ease the transition from a qualitative system model to a verified quantitative simulation model and thereby extend the benefits of quantitative SD modelling to a wider range of users; and, secondly, an improved SD modelling experience for all users of the SD methodology, but especially for those modellers who have limited mathematical experience.
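A dimensional-consistency check of the kind TAID mechanises can be sketched by carrying units as exponent maps over base dimensions. The variable names and the toy stock-and-flow equation below are assumptions for illustration, not the thesis's algorithm.

```python
from collections import Counter

def unit_mul(a, b):
    """Combine two units given as {base_dimension: exponent} maps."""
    out = Counter(a)
    out.update(b)
    return {k: v for k, v in out.items() if v != 0}

def unit_div(a, b):
    return unit_mul(a, {k: -v for k, v in b.items()})

# Declared units for a toy stock-and-flow model (assumed names).
units = {
    "Inventory":     {"widgets": 1},
    "ShipmentDelay": {"weeks": 1},
    "Shipments":     {"widgets": 1, "weeks": -1},
}

# Verify the rate equation  Shipments = Inventory / ShipmentDelay :
derived = unit_div(units["Inventory"], units["ShipmentDelay"])
if derived != units["Shipments"]:
    raise ValueError(f"dimensional inconsistency: {derived}")
print("equation is dimensionally consistent")
```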
5

Measuring distance between fuzzy concept lattices

Majidian, Andrei January 2015 (has links)
In data analysis, the arrangement of data in a hierarchical structure is an important technique for describing the relationships between data items. Formal concept analysis has been established as a mathematical tool for organising data into a hierarchical lattice-based structure, and the use of fuzzy formal concept analysis to produce a fuzzy lattice has been proposed as a way to model the imprecision and the vagueness inherent in many data sets. In a dynamic environment the relationships between data items may shift over time, and consequently the lattice generated from the new data may differ from the original. This thesis will be concerned with the development of a metric that gauges the edit-distance between two fuzzy concepts and thereby between two fuzzy lattices. Whilst it is possible to deal with a fuzzy context directly, a simpler approach is to discretise objects' memberships along the unit interval based on the fuzzy entries for each attribute. We shall present a method to transform a fuzzy context into an equivalent crisp context that produces a lattice isomorphic to the one that emerges from the original fuzzy context. Fuzzy formal concept analysis can generate a large number of concepts, some of which are very similar; this thesis will present an approach to factor out some of these smaller concepts using the edit-distance measure between concepts. For a coarse classification of data, a distance-based clustering method such as k-means is often used; we shall use formal fuzzy concept analysis, together with the notion of edit-distance, to find the nearest concept to each cluster and thereby obtain a semantic definition of each cluster based on its attributes.
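The fuzzy-to-crisp transformation can be illustrated with a standard alpha-cut scaling: each fuzzy attribute is split into one crisp attribute per discretisation level. The sketch below is a generic construction under assumed level values, not necessarily the thesis's exact method.

```python
def scale_fuzzy_context(fuzzy, levels=(0.25, 0.5, 0.75, 1.0)):
    """Scale a fuzzy formal context into a crisp one by alpha-cuts.

    fuzzy  : dict mapping (object, attribute) -> membership degree in [0, 1].
    levels : assumed discretisation of the unit interval.
    Each fuzzy attribute m is replaced by crisp attributes (m, alpha); an
    object holds (m, alpha) exactly when its membership in m is >= alpha.
    """
    crisp = set()
    for (g, m), mu in fuzzy.items():
        for alpha in levels:
            if mu >= alpha:
                crisp.add((g, (m, alpha)))
    return crisp

# Toy fuzzy context with two objects and two attributes.
fuzzy = {("o1", "tall"): 0.8, ("o1", "fast"): 0.3,
         ("o2", "tall"): 0.5, ("o2", "fast"): 1.0}
for incidence in sorted(scale_fuzzy_context(fuzzy), key=str):
    print(incidence)   # e.g. ('o1', ('tall', 0.25)) ... ('o2', ('fast', 1.0))
```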
6

New approach to solving a spectral problem in a perturbed periodic waveguide

Robert, Kieran Jean-Baptiste January 2008 (has links)
This thesis presents a numerical investigation of a problem on a semi-infinite waveguide. The domain considered here is of a much more general form than those that have been considered using classical techniques. The motivation for this work originates from the work in [28], where, unlike here, a perturbation technique was used to solve a simpler problem.
7

Modelling the ultrasonic response from rough defects using efficient finite element modelling techniques

Pettit, James Richard January 2014 (has links)
The work of this Engineering Doctorate addresses the research and development of efficient Finite Element (FE) modelling techniques for calculating the ultrasonic response from rough defects for Non-Destructive Evaluation (NDE) applications specific to the nuclear power generation industry. The project has been carried out in collaboration with Imperial College London and Rolls-Royce, allowing for the transfer of novel academic research into an applied industrial context. Within the UK nuclear power generation industry, one of the fundamental principles of regulation and operation is a robust safety culture where the highest levels of quality assurance are applied to safety-critical components. This principle places a requirement on NDE to deploy reliable and accurate inspections to ensure the structural integrity of the plant and its components. To achieve this goal, modelling techniques can be used to aid in the design and justification of ultrasonic NDE inspections. For smooth, relatively large defects, analytical methods can provide an accurate scattering solution; however, for more realistic rough defects these methods are applicable only to specialised cases of roughness. Defects which possess rough surfaces greatly affect ultrasonic wave scattering behaviour. Ultrasonic NDE inspections of safety-critical components rely upon this response for detecting and sizing flaws. Reliable characterisation is crucial, so it is essential to find an accurate means to predict any reductions in signal amplitude. An extension of Kirchhoff theory has formed the basis for many practical applications; however, it is widely recognised that these predictions are pessimistic owing to analytical approximations. As a result, NDE inspections can be overly sensitive, meaning that small and insignificant indications are incorrectly classed as potentially hazardous defects. This increases the likelihood of making false calls and incurring unnecessary expenditure for the programme. A numerical full-field modelling approach does not fall victim to such limitations, and therefore FE modelling techniques have been developed to deliver a non-conservative methodology for the prediction of expected back-scattering from rough defects. This has been achieved in two parts: improved performance of absorbing boundary methods for use with commercial FE codes, and application of domain linking algorithms to NDE inspection problems. This thesis presents the development of these methods and their application to industrial NDE inspections. Ultimately, the findings of this work will aid in establishing more reliable, less conservative reporting thresholds for the inspection of power plant components, reducing false call rates and therefore any unnecessary expenditure.
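For context, the Kirchhoff-theory baseline that such work improves upon predicts, for a Gaussian rough surface, a coherent (expected) reflected amplitude reduced by the factor exp(-2(kσ cos θ)²). The sketch below evaluates that standard factor; the wave speed and probe values are illustrative assumptions, and this is not the thesis's FE method.

```python
import numpy as np

def kirchhoff_coherent_factor(freq_mhz, sigma_mm, theta_deg=0.0,
                              c_mm_per_us=5.9):
    """Expected coherent amplitude ratio <A>/A0 for reflection from a
    Gaussian rough surface under the Kirchhoff approximation:
        <A>/A0 = exp(-2 * (k * sigma * cos(theta))**2)

    freq_mhz    : probe centre frequency (MHz)
    sigma_mm    : RMS surface roughness (mm)
    theta_deg   : angle of incidence from the surface normal
    c_mm_per_us : wave speed; 5.9 mm/us is an assumed illustrative value
                  for longitudinal waves in steel
    """
    k = 2.0 * np.pi * freq_mhz / c_mm_per_us            # wavenumber, 1/mm
    return np.exp(-2.0 * (k * sigma_mm * np.cos(np.radians(theta_deg))) ** 2)

# A 5 MHz normal-incidence inspection of a defect with 0.1 mm RMS roughness:
print(f"{kirchhoff_coherent_factor(5.0, 0.1):.2f}")   # ~0.57 of smooth amplitude
```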
8

Investigation into the computer modelling of industrial processes

Dexter, Arthur January 1970 (has links)
No description available.
9

Advanced modelling of technical apparel

Tarrier, James January 2011 (has links)
The aim of this thesis is to investigate the possibility of using a commercial finite element analysis (FEA) package to assemble technical apparel from its constituent pattern pieces. The potential to predict the form and functionality of an assembled garment without the need to create a physical prototype is attractive to pattern engineers and others involved in the design and development of technical apparel.
10

Dynamic extended finite element method (XFEM) analysis of discontinuous media

Toolabi, Milad January 2015 (has links)
The extended finite element method (XFEM) has proved promising for approximating solutions with locally non-smooth features such as jumps, kinks, high gradients, inclusions, or cracks in solid mechanics problems. The XFEM uses the properties of the partition of unity finite element method (PUFEM) to represent discontinuities without the corresponding finite element mesh requirements. In the present thesis numerical simulations of statically and dynamically loaded heterogeneous beams, heterogeneous plates and two-dimensional cracked media of isotropic and orthotropic constitutive behaviour are performed using XFEM. The examples are chosen such that they represent strong and weak discontinuities, static and dynamic loading conditions, anisotropy and isotropy, and strain-rate dependent and independent behaviours. At first, the Timoshenko beam element is studied by adopting the Hellinger-Reissner (HR) functional with the out-of-plane displacement and through-thickness shear strain as degrees of freedom. Heterogeneous beams are considered, and the mixed formulation is combined with XFEM, so mixed enrichment functions are used. The results from the proposed mixed formulation of XFEM correlate well with analytical solutions and the Finite Element Method (FEM) and show higher rates of convergence. Thus the proposed method is shear-locking free and computationally more efficient than its conventional counterparts. The study is then extended to a heterogeneous Mindlin-Reissner plate with out-of-plane shear assumed constant through the length of the element and with a quadratic distribution through the thickness. In all cases the zero-shear condition on the traction-free surfaces at the top and bottom is satisfied. These cases involve weak discontinuity. Then a two-dimensional orthotropic medium with an edge crack is considered and the static and dynamic J-integrals and stress intensity factors (SIFs) are calculated. This is achieved by fully (reproducing elements) or partially (blending elements) enriching the elements in the vicinity of the crack tip or body. The enrichment type is restricted to extrinsic mesh-based topological local enrichment in the current work. A constitutive model for strain-rate dependent moduli and Poisson's ratios (viscoelasticity) is formulated. The same problem is studied using the viscoelastic constitutive material model implemented in ABAQUS through an implicit user-defined material subroutine (UMAT). The results from XFEM correlate well with those of the finite element method (FEM). It is shown that the value of the maximum J-integral increases when the material exhibits strain-rate sensitivity.
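For reference, the extrinsic local enrichment described above takes the standard XFEM form, shown here with the usual isotropic crack-tip branch functions (the orthotropic case replaces the F_l but keeps the same structure):

```latex
u^h(\mathbf{x}) \;=\; \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
  \;+\; \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
  \;+\; \sum_{k \in K} N_k(\mathbf{x}) \sum_{l=1}^{4} F_l(r,\theta)\,\mathbf{b}_k^l,
\qquad
\{F_l\} \;=\; \sqrt{r}\,\Bigl\{\sin\tfrac{\theta}{2},\;\cos\tfrac{\theta}{2},\;
  \sin\tfrac{\theta}{2}\sin\theta,\;\cos\tfrac{\theta}{2}\sin\theta\Bigr\}
```

Here H is the Heaviside (jump) enrichment on the nodes J whose supports are cut by the crack (the strong discontinuity), and the branch functions F_l span the asymptotic tip field on the tip nodes K; fully enriched elements reproduce the enrichment exactly, while blending elements are only partially enriched.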
