  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The effect of increasing localized suction on a boundary layer and the generation of horseshoe vortices

Ovenden, Nicholas Charles January 2001 (has links)
No description available.
12

Front-tracking finite element methods for a void electro-stress migration problem

Sacconi, Andrea January 2015 (has links)
Continued research in electronic engineering technology has led to a miniaturisation of integrated circuits. Further reduction in the dimensions of the interconnects is impeded by the presence of small cracks or voids. Subject to high current and elastic stress, voids tend to drift and change shape in the interconnect, leading to a potential mechanical failure of the system. This thesis investigates the temporal evolution of voids moving along conductors in the presence of surface diffusion, electric loading and elastic stress. We simulate a bulk-interface coupled system, with a moving interface governed by a fourth-order geometric evolution equation and a bulk domain in which the electric potential and the displacement field are computed. We first give a general overview of geometric evolution equations, which define the motion of a hypersurface by prescribing its normal velocity in terms of geometric quantities. We briefly describe the three main approaches that have been proposed in the literature to solve this class of equations numerically, namely the parametric, level set and phase field approaches. We then present in detail two methods from the parametric category for the void electro-stress migration problem. We first introduce an unfitted method, where the bulk and interface grids are totally independent, i.e. no topological compatibility between the two grids has to be enforced over time. We then discuss a fitted method, where the interface grid is at all times part of the boundary of the bulk grid. A detailed analysis, in terms of existence and uniqueness of the finite element solutions, experimental order of convergence (when the exact solution to the free boundary problem is known) and coupling operations (e.g., smoothing/remeshing of the grids, intersection between elements of the two grids), is carried out for both approaches.
Several numerical simulations, both two- and three-dimensional, are performed in order to test the accuracy of the methods.
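For orientation, the prototypical fourth-order geometric evolution law of the kind referred to above is surface diffusion flow, which prescribes the normal velocity of the interface as the surface Laplacian of its mean curvature. This is a standard textbook form, not the thesis's actual coupled law (which also involves electric and elastic contributions), and sign conventions vary:

```latex
% Surface diffusion flow on an evolving interface Gamma(t):
%   V             : normal velocity of Gamma(t)
%   \kappa        : mean curvature of Gamma(t)
%   \Delta_\Gamma : Laplace-Beltrami (surface Laplacian) operator
V \;=\; \Delta_{\Gamma}\,\kappa \qquad \text{on } \Gamma(t)
```

Because the curvature already involves two spatial derivatives, applying the surface Laplacian makes the law fourth order, which is what motivates the specialised numerical treatment discussed in the abstract.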
13

Curved Finite Elements in the Analysis of Solid, Shell and Plate Structures

Ahmad, S. January 1969 (has links)
The formulations of a number of curved finite elements for analysing solid, shell or plate structures are presented in this thesis. The hybrid method of derivation of element stiffness matrices, based on the principle of minimum complementary energy using assumed stress distributions, is developed for three-dimensional stress analysis. A new family of curved thick shell finite elements based on the principle of minimum potential energy using assumed displacements is developed, overcoming the previous approximations to the geometry of the structure and the neglect of shear deformations. A general formulation for a curved shell of arbitrary shape is presented, as well as a simplified form suitable for axisymmetric situations. Further modifications of the general thick shell element are presented in the form of a membrane shell element and a thick plate element. A computer program has been written and developed for each type of element. The method of computation is discussed in an appendix. A number of illustrative examples ranging from thin to thick shell applications are given to assess the versatility, accuracy and economy attainable with the new formulations. These examples include a roof, a conduit, an idealised dam, an actual dam, tanks and a cooling tower, for which many alternative solutions were used. The possible scope of further developments is suggested.
14

Algorithms for the solution of large-scale scheduling problems

Dimitriadis, Andreas Dimitriou January 2011 (has links)
Modern multipurpose plants play a key role within the overall current climate of business globalisation, aiming to produce highly diversified products that can address the needs and demands of customers spread over wide geographical areas. The inherent size and diversity of this process give rise to the need to plan and schedule large-scale combined production and distribution operations. In recent years, it has become possible to model combined production and distribution processes mathematically to a relatively high degree of detail. This modelling usually results in optimisation problems involving both discrete and continuous decisions. Despite much progress in numerical solution algorithms, the size and complexity of problems of industrial interest often significantly exceed those that can be tackled directly using standard algorithms and codes. This thesis is, therefore, primarily concerned with algorithms that exploit the structure of the underlying mathematical formulations to permit the practical solution of such problems. The Resource-Task Network (RTN) process representation is a general framework that has been used successfully for modelling and solving relatively small process scheduling problems. This work identifies and addresses the limitations that arise when RTNs are used for modelling large-scale production planning and scheduling problems. A number of modifications are suggested in order to make the representation more efficient at capturing partial resource equivalence without losing any modelling detail. The length of the time horizon under consideration is a key factor affecting the complexity of the resulting scheduling problem. In view of this, this thesis presents two time-based decomposition approaches that attempt to solve the scheduling problem by considering only part of the time horizon in detail at any one step.
The first time-based decomposition scheme is a rigorous algorithm that is guaranteed to derive optimal detailed schedules. The second scheme is a family of rolling horizon algorithms that can obtain good, albeit not necessarily optimal, detailed solutions to medium-term scheduling problems within reasonable computational times. The complexity of the process under consideration, and in particular the large number of interacting tasks and resources, is another factor that directly affects the difficulty of the resulting scheduling problem. Consequently, a task-based decomposition algorithm for complex RTNs is proposed, exploiting the fact that some tasks (e.g. those associated with transportation activities)
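The general shape of a rolling horizon scheme can be sketched as follows. This is an illustrative toy, not the thesis's actual algorithms (which solve RTN-based optimisation subproblems); the function names and the greedy "solver" stand in for a real detailed-scheduling step:

```python
# Hypothetical sketch of a rolling-horizon scheduling loop: only a detailed
# window of the horizon is optimised at each step, the decisions in the
# first part of the window are committed, and the window rolls forward.

def solve_window(tasks, start, end):
    """Toy 'detailed' solver: pick the tasks whose release time falls in
    [start, end). A real implementation would solve a MILP here."""
    return sorted(t for t in tasks if start <= t < end)

def rolling_horizon(tasks, horizon, window, step):
    schedule = []
    start = 0
    while start < horizon:
        # Solve only the current window in detail ...
        window_plan = solve_window(tasks, start, start + window)
        # ... but commit only the decisions in the first `step` periods.
        schedule.extend(t for t in window_plan if t < start + step)
        start += step
    return schedule
```

Overlapping the window beyond the committed step (window > step) is what lets each subproblem anticipate near-future decisions without paying the cost of optimising the full horizon at once.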
15

Exploratory Data Analysis of the Large Scale Gas Injection Test (Lasgit)

Bennett, Daniel January 2014 (has links)
This thesis presents an Exploratory Data Analysis (EDA) performed on the dataset arising from the operation of the Large Scale Gas Injection Test (Lasgit). Lasgit is a field-scale experiment located approximately 420 m underground at the Äspö Hard Rock Laboratory (HRL) in Sweden. The experiment is designed to study the impact of gas build-up and subsequent migration through the Engineered Barrier System (EBS) of a KBS-3 concept radioactive waste repository. Investigation of the smaller-scale, or ‘second order’, features of the dataset is the focus of the EDA, with the study of such features intended to contribute to the understanding of the experiment. In order to investigate Lasgit's substantial dataset (26 million data points), a bespoke computational toolkit, the Non-Uniform Data Analysis Toolkit (NUDAT), has been developed, designed to expose and quantify difficult-to-observe phenomena in large, non-uniform datasets. NUDAT's capabilities include non-parametric trend detection, frequency domain analysis, and second order event candidate detection. The various analytical modules developed and presented in this thesis were verified against simulated data possessing prescribed and quantified phenomena before application to Lasgit's dataset. The Exploratory Data Analysis of Lasgit's dataset presented in this thesis reveals and quantifies a number of phenomena, for example: the tendency for spiking to occur within groups of sensor records; estimates for the long term trends; the temperature profile of the experiment with depth and time, along with the approximate seasonal variation in stress/pore-water pressure; and, in particular, the identification of second order event candidates as small as 0.1% of the macro-scale behaviours in which they reside. A selection of the second order event candidates have been aggregated together into second order events using the event candidates' mutual synchronicities.
Interpretation of these events suggests the possibility of small-scale discrete gas flow pathways forming, possibly via a dilatant flow mechanism. The interpreted events' typical behaviours, in addition to the observed spiking tendency, also support the grouping of sensors by sensor type. The developed toolkit, NUDAT, and its subsequent application to Lasgit's dataset have enabled an investigation into the small-scale, or ‘second order’, features of the experiment's results. The analysis presented in this thesis provides insight into Lasgit's experimental behaviour and, as such, contributes to the understanding of the experiment.
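A minimal example of non-parametric trend detection of the kind mentioned in the abstract is the Mann-Kendall S statistic. NUDAT itself is not public, so this is an assumed representative method, not the toolkit's code:

```python
# Mann-Kendall S statistic: a non-parametric trend indicator that counts
# concordant minus discordant pairs. It makes no assumption about the
# distribution of the data, which suits noisy, non-uniform sensor records.

def mann_kendall_s(series):
    """S >> 0 suggests an increasing trend, S << 0 a decreasing one,
    S near 0 no monotonic trend. O(n^2) pairwise comparison."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of the pairwise difference
    return s
```

In practice S is normalised by its variance to give a significance level; the raw statistic above is enough to show the pairwise-sign idea.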
16

Computational and algorithmic solutions for large scale combined finite-discrete elements simulations

Schiava D'Albano, Guillermo Gonzalo January 2014 (has links)
In this PhD thesis, some key computational and algorithmic aspects of the Combined Finite-Discrete Element Method (FDEM) are critically evaluated, and alternative novel or improved solutions are proposed, developed and tested. In particular, two novel algorithms for contact detection have been developed, and a comparative study of different contact detection algorithms has been made. The scope of this work also includes large and grand scale FDEM problems that require intensive use of CPU; thus, novel parallelization solutions for grand scale FDEM problems have been developed and implemented using MPI (Message Passing Interface) based domain decomposition. In this context, special attention is paid to the rapidly developing multi-core desktop architectures. The proposed novel solutions have been intensively validated, verified and demonstrated using various problems from the literature.
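One common family of broad-phase contact detection algorithms in discrete-element codes is spatial hashing, sketched below. This is illustrative only; the thesis develops its own contact detection algorithms, which are not reproduced here:

```python
# Broad-phase contact search via spatial hashing: particles are binned into
# grid cells, and only particles in the same or neighbouring cells become
# candidate pairs for the (more expensive) narrow-phase geometric check.
from collections import defaultdict
from itertools import combinations

def candidate_pairs(centres, cell_size):
    """centres: list of 2-D particle centres; cell_size should be on the
    order of the largest particle diameter. Returns a set of index pairs."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(centres):
        grid[(int(x // cell_size), int(y // cell_size))].append(idx)
    pairs = set()
    for (cx, cy), _ in grid.items():
        # Gather this cell plus its 8 neighbours.
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid.get((cx + dx, cy + dy), []))
        pairs.update((a, b) for a, b in combinations(sorted(set(nearby)), 2))
    return pairs
```

The attraction of this class of method is near-linear cost in the number of particles when they are roughly uniformly distributed, compared with the quadratic cost of checking every pair.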
17

Bayesian-based techniques for tracking multiple humans in an enclosed environment

ur-Rehman, Ata January 2014 (has links)
This thesis deals with the problem of online visual tracking of multiple humans in an enclosed environment. The focus is to develop techniques to deal with the challenges of a varying number of targets, inter-target occlusions and interactions when every target gives rise to multiple measurements (pixels) in every video frame. This thesis contains three different contributions to the research in multi-target tracking. Firstly, a multiple target tracking algorithm is proposed which focuses on mitigating the inter-target occlusion problem during complex interactions. This is achieved with the help of a particle filter, multiple video cues and a new interaction model. A Markov chain Monte Carlo particle filter (MCMC-PF) is used along with a new interaction model which helps in modeling interactions of multiple targets. This helps to overcome tracking failures due to occlusions. A new weighted Markov chain Monte Carlo (WMCMC) sampling technique is also proposed which assists in achieving a reduced tracking error. Although effective, to accommodate multiple measurements (pixels) produced by every target, this technique aggregates measurements into features, which results in information loss. In the second contribution, a novel variational Bayesian clustering-based multi-target tracking framework is proposed which can associate multiple measurements to every target without aggregating them into features. It copes with complex inter-target occlusions by maintaining the identity of targets during their close physical interactions and efficiently handles a time-varying number of targets. The proposed multi-target tracking framework consists of background subtraction, clustering, data association and particle filtering. A variational Bayesian clustering technique groups the extracted foreground measurements while an improved feature-based joint probabilistic data association filter (JPDAF) is developed to associate clusters of measurements to every target.
The data association information is used within the particle filter to track multiple targets. The clustering results are further utilised to estimate the number of targets. The proposed technique improves the tracking accuracy. However, the proposed feature-based JPDAF technique results in an exponential growth of the computational complexity of the overall framework as the number of targets increases. In the final work, a novel data association technique for multi-target tracking is proposed which more efficiently assigns multiple measurements to every target, with a reduced computational complexity. A belief propagation (BP) based cluster-to-target association method is proposed which exploits the inter-cluster dependency information. Both the locations and features of clusters are used to re-identify the targets when they emerge from occlusions. The proposed techniques are evaluated on benchmark data sets and their performance is compared with state-of-the-art techniques using quantitative and global performance measures.
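The particle-filtering machinery that all three contributions build on can be illustrated with a minimal bootstrap filter for a single 1-D target with a constant-velocity motion model. This is a generic textbook sketch, not the thesis's MCMC-PF or its interaction models:

```python
# Bootstrap particle filter: predict each particle through the motion
# model, weight it by the measurement likelihood, estimate, then resample.
import math
import random

def particle_filter(measurements, n_particles=500, noise=1.0, seed=0):
    rng = random.Random(seed)
    # Particles are (position, velocity), initialised around the first measurement.
    particles = [(measurements[0] + rng.gauss(0, 1), rng.gauss(0, 1))
                 for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Predict: propagate through the constant-velocity model plus noise.
        particles = [(p + v + rng.gauss(0, 0.1), v + rng.gauss(0, 0.1))
                     for p, v in particles]
        # Weight: Gaussian likelihood of the measurement given the particle.
        weights = [math.exp(-0.5 * ((z - p) / noise) ** 2) for p, _ in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: weighted mean position.
        estimates.append(sum(w * p for w, (p, _) in zip(weights, particles)))
        # Resample: draw particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The multi-target setting in the thesis adds the hard part on top of this loop: deciding which measurements belong to which target (data association) and keeping identities through occlusions.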
18

Parallel finite element analysis

Margetts, Lee January 2002 (has links)
Finite element analysis is versatile and used widely in a range of engineering and scientific disciplines. As time passes, the problems that engineers and designers are expected to solve are becoming more computationally demanding. Often the problems involve the interplay of two or more processes which are physically and therefore mathematically coupled. Although parallel computers have been available for about twenty years to satisfy this demand, finite element analysis is still largely executed on serial machines. Parallelisation appears to be difficult, even for the specialist. Parallel machines, programming languages, libraries and tools are used to parallelise old serial programs with mixed success. In some cases the serial algorithm is not naturally suitable for parallel computing. Some argue that rewriting the programs from scratch, using an entirely different solution strategy, is a better approach. Taking this point of view, using MPI for portability, a mesh-free element-by-element method for simple data distribution and the appropriate iterative solvers, a general parallel strategy for finite element analysis is developed and assessed.
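The element-by-element idea pairs naturally with iterative solvers such as conjugate gradients, because the solver only ever needs matrix-vector products, which can be accumulated element by element without assembling a global matrix. A serial sketch of this combination follows (names and sizes are illustrative; in the parallel setting each MPI rank would own a subset of elements and the dot products would become all-reduces):

```python
# Element-by-element conjugate gradient: the global stiffness matrix is
# never formed; its action on a vector is summed from element matrices.

def ebe_matvec(elements, x):
    """y = A @ x accumulated element by element.
    Each element is (dof_indices, local_stiffness_matrix)."""
    y = [0.0] * len(x)
    for dofs, ke in elements:
        for a, i in enumerate(dofs):
            y[i] += sum(ke[a][b] * x[j] for b, j in enumerate(dofs))
    return y

def conjugate_gradient(elements, b, tol=1e-10, max_iter=100):
    """Standard CG for a symmetric positive-definite system, using only
    the element-by-element matvec above."""
    x = [0.0] * len(b)
    r = b[:]                      # residual (initial guess is zero)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    if rs_old < tol:
        return x
    for _ in range(max_iter):
        ap = ebe_matvec(elements, p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

Keeping the data element-local is what makes the distribution across processors "simple" in the sense the abstract describes: each processor holds its own elements and only shared degrees of freedom need communication.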
19

Non-linear finite element analysis of flexible pipes for deep-water applications

Edmans, Ben January 2013 (has links)
Flexible pipes are essential components in the subsea oil and gas industry, where they are used to convey fluids under conditions of extreme external pressure and (often) axial load, while retaining low bending stiffness. This is made possible by their complex internal structure, consisting of unbonded components that are, to a certain extent, free to move internally relative to each other. Due to the product's high value and the high cost of testing facilities, much effort has been invested in the development of analytical and numerical models for simulating flexible pipe behaviour, which includes bulk response to various loading actions, calculation of component stresses and use of this data for component fatigue calculations. In this work, it is proposed that the multi-scale methods currently in widespread use for the modelling of composite materials can be applied to the modelling of flexible pipe. This allows the large-scale dynamics of an installed pipe (often several kilometers in length) to be related to the behaviour of its internal components (with characteristic lengths in millimeters). To do this, a formal framework is developed for an extension of the computational homogenisation procedure that allows multi-scale models to be constructed in which the models at both the large and small scales are composed of different structural elements. Within this framework, a large-scale flexible pipe model is created, using a two-dimensional corotational beam formulation with a constitutive model representative of flexible pipe bulk behaviour, which was obtained by further development of a recently proposed formulation inspired by the analogy between flexible pipe structural behaviour and that of plastic materials with non-associative flow rules. A three-dimensional corotational formulation is also developed. The model is shown to perform adequately for practical analyses.
Next, a detailed finite element (FE) model of a flexible pipe was created, using shell finite elements, generalised periodic boundary conditions and an implicit solution method. This model is tested against two analytical flexible pipe models for several basic load cases. Finally, the two models are used to carry out a sequential multi-scale analysis, in which a set of simulations using the detailed FE model is carried out in order to find the most appropriate coefficients for the large-scale model.
20

Annotations sémantiques pour l'interopérabilité des systèmes dans un environnement PLM / Semantic annotations for systems interoperability in a PLM environment

Liao, Yongxin 14 November 2013 (has links)
In manufacturing enterprises, the Product Lifecycle Management (PLM) approach has been considered an essential solution for improving product competitiveness. It aims at providing a shared platform that brings together the different enterprise systems at each stage of a product's life cycle, within or across enterprises. Although the main software companies are making efforts to create tools offering a complete and integrated set of systems, most of them have not implemented all of the systems, and they do not provide a coherent integration of the entire information system. This results in a kind of "tower of Babel", where each application is considered an island in the middle of an ocean of information, managed by many stakeholders in an enterprise, or even in a network of enterprises. The heterogeneity of these stakeholders further aggravates the interoperability problem. The objective of this thesis is to address the issue of semantic interoperability by proposing a formal semantic annotation method to support mutual understanding of the semantics of the information shared and exchanged in a PLM environment.
