  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

The effects of third-body materials in the wheel-rail interface

Hardwick, Christopher January 2013 (has links)
No description available.
522

Diode area melting : use of high power diode lasers in additive manufacturing of metallic components

Zavala Arredondo, Miguel Angel January 2017 (has links)
Additive manufacturing processes have been developed to a stage where they can now be used to manufacture net-shape, high-value components. Selective Laser Melting (SLM) uses one or more deflected high-energy fibre laser sources (e.g. 200 – 400 W each) to raster scan, melt and fuse layers of metallic powder feedstock. The beams are deflected by a scanning galvo mirror system, and an F-theta lens provides a flat field at the image plane of the scanning system. However, this deflected raster-scanning methodology is high cost (adding high-power deflected lasers to SLM to increase productivity can incur penalties of ~£170K per additional laser), energy inefficient (the wall-plug efficiency of typical SLM fibre laser sources is ~50 % [1]) and significantly limited in output productivity by the rate of feedstock melting (e.g. a typical theoretical SLM build rate for stainless steel is < 2.8 mm3/s (< 10 cm3/hr) [2]). This work details the development of a new additive manufacturing process known as Diode Area Melting (DAM), featuring multiple highly efficient laser sources (> 60 % wall-plug efficiency [1]) with potential for scaling (< £100 penalty per additional laser beam). The process utilises customised architectural arrays of low-power laser diode emitters (~5 W laser power each) for high-speed parallel processing of metallic feedstock (theoretical build rate of scaled DAM for stainless steel > 2.8 mm3/s (> 10 cm3/hr)). Individually addressable diode emitters selectively melt feedstock from a pre-laid powder bed. The laser diodes operate at a shorter wavelength (808 nm) than conventional SLM fibre lasers (1064 nm), theoretically enabling more efficient energy absorption for specific materials [3][4]. 
The melting capabilities of the DAM process were tested for low-melting-point eutectic BiZn2.7 elemental powders, AlSi12, and higher-temperature pre-alloyed 17-4 and 316L stainless steel powders. The process was shown to be capable of fabricating controllable geometric features with evidence of complete melting and fusion between multiple powder layers. This investigation presents a parametric analysis of the DAM process, identifying the effect of powder characteristics, laser beam profile, laser power and scan speed on the porosity of a single-layer sample. Also presented is the effect of process energy density on melt pool depth (the depth of irradiated thermal energy penetration capable of achieving melting) for 316L stainless steel powder. An analysis of the density and the melt depth fraction of single layers is presented in order to identify the conditions that lead to the fabrication of fully dense DAM parts. Energy densities in excess of 86 J/mm3 were theorised as sufficient to enable processing of fully dense layers. Finally, this investigation presents the first work modelling the DAM process, detailing the unique thermal profiles experienced within the laser-processed powder bed. Process optimisation is improved through modelling the thermal temperature distribution, targeting processing conditions that induce full melting for variable powder layer thickness. In this work the developed thermal model simulates the processing of 316L stainless steel and is validated with experimental trials. Key findings identified in the present research include the following: • Edge-emitting diode laser modules featuring multiple ~5 W emitters can be used directly in AM of metallic components. • The typical 808 nm diode laser wavelength enables high laser absorption in a metal powder-bed AM process, which in turn allows the use of lower laser power (< 5 W) than that conventionally used in SLM (100-400 W). 
• Temperatures in excess of 1450 °C can be reached in metallic (stainless steel) powder beds with < 5 W diode-laser spots, using appropriate optical mechanisms to collimate and focus the low-quality beam (27° and 7° divergence in the fast and slow axes respectively) down to < 250 µm melting spots. • The ability has been identified to near-net shape and process material with melt temperatures in excess of 1450 °C (i.e. stainless steel powder) using multiple individually addressable, non-deflected, low-power diode laser beams scanning in parallel to selectively melt material from a powder bed. • DAM process parameters including laser beam profile (i.e. spot spacing and spot dimensions), particle size distribution (emissivity and conductivity of the powder), laser power and scan speed affect the porosity and melt-pool uniformity of DAM components. • An energy density of 86 J/mm3 can be theorised as the minimum required for fully dense DAM (stainless steel) components. • The effective melt area in DAM can be 6.67 % in excess of the actual spot size (i.e. a 4.75 mm laser beam width has an effective melt width of 5.067 mm). • Temperature gradients and cooling rates during DAM processing of metallic feedstock are similar to those of optimised pre-heated SLM mechanisms, with low residual stress formation.
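The 86 J/mm3 threshold above can be checked against a standard volumetric energy density definition, E = P/(v·h·t), commonly used in laser powder-bed fusion; the abstract does not state which definition the thesis uses, and the parameter values below (scan speed, spot spacing, layer thickness) are illustrative assumptions rather than values from the work.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """E = P / (v * h * t): volumetric energy density in J/mm^3."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative DAM-like parameters: one ~4.5 W diode spot, a slow 1 mm/s scan,
# 0.25 mm spot spacing and a 0.1 mm powder layer.
e = volumetric_energy_density(4.5, 1.0, 0.25, 0.1)   # 180 J/mm^3
fully_dense = e >= 86.0   # threshold theorised in the abstract
```

At these assumed settings the low-power diode beam still clears the theorised threshold, because the parallel DAM scan speed is far lower than typical deflected-SLM raster speeds.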
523

On Bayesian networks for structural health and condition monitoring

Fuentes, Ramon January 2017 (has links)
The first step in data-driven approaches to Structural Health Monitoring (SHM) is that of damage detection. This is a problem that has been well studied in laboratory conditions. Yet, SHM remains an academic topic, not yet widely implemented in industry. One of the main reasons for this is arguably the difficulty in dealing with Environmental and Operational Variabilities (EOVs), which have a tendency to influence damage-sensitive features in ways similar to damage itself. A large number of the methods developed for SHM applications make use of linear Gaussian models for various tasks including dimensionality reduction, density estimation and system identification. A wide range of linear Gaussian models can be formulated as special cases of a general class of probabilistic graphical models, or Bayesian networks. The work presented here discusses how Bayesian networks can be used systematically to approach different types of damage detection problems, through their likelihood function. A likelihood evaluates the probability that an observation belongs to a particular model. If this model correctly captures the undamaged state of the system, then a likelihood can be used as a novelty index, which can point to the presence of damage. Likelihood functions can be systematically exploited for damage detection purposes across the vast range of linear Gaussian models. One of the key benefits of this fact is that simple models can easily be extended to mixtures of linear Gaussian models. It is shown how this approach can be effective in dealing with operational and environmental variabilities. This thesis thus provides a point of view on performing novelty detection under this wide class of models systematically with their likelihood functions. Models that are typically used for other purposes can become powerful novelty detectors in this view. The relationship between Principal Component Analysis (PCA) and Kalman filters is a good example of this. 
Under the graphical model perspective these two models are a simple variation of each other, modelling data without and with time dependence respectively. Provided these models are trained with representative data from a non-damaged system, their likelihood function presents a useful novelty index. Their limitation to modelling linear Gaussian data can be overcome through the mixture modelling interpretation. Through graphical models, this is a straightforward extension, but one that retains a probabilistic interpretation. The impact of this interpretation is that environmental and operational variability, as well as potential nonlinearity, in SHM features can be captured by these models. Even though the interpretation changes depending on the model, the likelihood function can consistently be used as a damage indicator across models such as Gaussian mixtures, PCA, Factor Analysis, autoregressive models, Kalman filters and switching Kalman filters. The work here focuses on these models. There are various ways in which these models can be used, but here the focus is narrowed to exploring them as novelty detectors and showing their application in different contexts. The context in this case refers to different types of SHM data and features, whether vibration, acoustics, ultrasound, performance metrics, etc. The thesis divides into three main sections. The first presents an overview and scope, with introductions to SHM data, machine learning and the use of likelihood functions for novelty detection. The thesis then provides a discussion of the theoretical background for probabilistic graphical models, or Bayesian networks. Separate chapters are dedicated to the discussion of Bayesian networks to model static and dynamic data (without and with temporal dependencies, respectively). Furthermore, three different application examples are presented to demonstrate the use of likelihood function inference for damage detection. 
The first of these is a simulated mass-spring-damper system, with varying stiffness in its non-damaged condition and with a cubic spring nonlinearity. This system presents a challenge from the point of view of characterising the changing environment in terms of global stiffness and excitation energy. It is shown how mixtures of PCA models can be used to tackle this problem when frequency domain features are used, and how mixtures of linear dynamical systems (Kalman filters) can be used to successfully characterise the baseline undamaged system and to identify the presence of damage directly from time domain measurements. Another case study involves the detection of damage on the Z-24 bridge. This is a well-studied problem in SHM research, and it is of interest due to the nonlinear stiffness effects caused by temperature changes. The features used here are the first four natural frequencies of the bridge. It is demonstrated how a Gaussian mixture model can characterise the undamaged condition, and its likelihood is able to accurately predict the presence of damage. The third case study involves the prediction of various stages of damage on a wind turbine bearing. This is an experimental laboratory investigation, and the problem is also tackled with a Gaussian mixture model. It is of interest because the lowest damage level seeded in the bearing was subsurface yield. This is of great relevance to the wind turbine community, as detecting this level of damage is currently not feasible. Features from Acoustic Emission (AE) measurements were used to train a Gaussian mixture model. It is shown that the likelihood function of this model can correctly predict the presence of damage.
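As a minimal sketch of the likelihood-as-novelty-index idea described above, a single multivariate Gaussian (the simplest of the linear Gaussian models discussed) can be trained on baseline features and its log-likelihood thresholded; the feature data and the 1% threshold below are synthetic illustrations, not the thesis's data or settings, and a mixture model would replace the single density with a weighted sum of components.

```python
import numpy as np

def fit_gaussian(X):
    """Estimate mean and covariance from baseline (undamaged) features, n x d."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(x, mu, cov):
    """Multivariate Gaussian log-density; low values indicate novelty."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))        # baseline features
mu, cov = fit_gaussian(X_train)
ll_train = [log_likelihood(x, mu, cov) for x in X_train]
threshold = np.quantile(ll_train, 0.01)              # flag the rarest 1%

x_new = np.array([5.0, 5.0, 5.0, 5.0])               # strongly shifted observation
is_novel = log_likelihood(x_new, mu, cov) < threshold
```

The same thresholding applies unchanged when the density comes from a Gaussian mixture, PCA or a Kalman filter, which is the consistency the thesis exploits.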
524

Designing a task assessment tool for ease and risk within the domestic environment

Zaheer, Asim January 2017 (has links)
Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL) enable people to continue to live independently, as far as possible. Slowing down a person’s decline or utilising equipment to maintain independence is a growing area of research. However, how we carry out daily tasks within the home can accelerate this decline. To date, little or no consideration has been given to quantifying load and the risk level associated with the performance of daily tasks within the home environment. This study evaluates and quantifies load and the risk level associated with the performance of domestic tasks which could be responsible for a person’s change in behaviour in the later stages of life. In order to understand the IADL tasks, an initial survey was used to gather different people’s perceptions about these tasks, and then to discover the hardest sub-task within the selected tasks. An observational study used existing ergonomic assessment methods to evaluate the postural load, and revealed that existing ergonomic tools are not enough on their own as they did not identify other risks which are associated with the performance of daily tasks. Finally, a task assessment tool for ease and risk (AER) was developed to evaluate and quantify the risk associated with the performance of daily tasks. AER is useful in the detection of early warnings (pre-event) for healthy individuals as well as for those undergoing rehabilitation, as it can easily identify the tasks that are hardest to perform. The tool is based on three risk parameters: (1) psychological perception of the tasks, (2) adopted postures and (3) manual handling. It is capable of assessing the risk level associated with individual tasks while simultaneously assessing the domestic load over a period of time. The novelty of this work is to propose a self-assessment tool which provides the knowledge about a person’s own risk associated with the performance of domestic tasks. 
The initial development of AER consisted of two phases: (1) development of AER and (2) evaluation of user trials, based on (a) ease of use of the AER record sheet and (b) a validity study. The AER trials overall used 20 healthy, able-bodied participants, and both trials were performed in the home environment. AER consists of a booklet and record sheets and specifically covers instrumental activities of daily living (IADL) [1] tasks, but can also be extended to cover all tasks performed in the home environment. In the ease-of-use trial, the feedback questionnaire confirmed that AER is easy to use, free from ambiguity and applicable to almost all the tasks performed in the home environment, and almost all participants agreed that AER does not require training for assessment. In the validity trials, the AER-predicted risk level was measured in relation to perceived discomfort, and it was found that AER has high sensitivity (78%), specificity (74%) and predictive values (73% positive and 80% negative), which revealed that AER is a sensitive and useful tool for identifying risk and perceived discomfort in performing daily tasks. It was also concluded that the participants' self-assessed (IADL) exposure scores were reasonably similar to the researcher's assessment, which revealed that regular use of AER will help to obtain more accurate and reliable results. AER is able to assess the risk level associated with a single task and can also assess general behaviour or domestic load over a period of time. AER is also helpful for identifying those tasks which require more caution when performed and which are responsible for a person's change in behaviour in later life. Moreover, it is believed that AER may play a vital role in the development of comprehensive and proactive strategies for detecting problems related to the home environment and managing them effectively before they affect a person's quality of life.
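The sensitivity, specificity and predictive values quoted above follow from a standard 2×2 confusion matrix. The sketch below shows the calculation; the counts are hypothetical, chosen only to give values near the reported range, since the abstract does not give the underlying tallies.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-tool metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=18, fp=6, tn=17, fn=5)
```

With these assumed counts, sensitivity is 18/23 ≈ 78% and specificity 17/23 ≈ 74%, matching the shape of the reported figures.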
525

The fatigue of carbon fibre composites containing interlaminar inkjet printed polymer droplets

Cartledge, Andrew January 2017 (has links)
The objective of this research was to investigate a novel method of increasing the interlaminar toughness of prepreg composites to improve their fatigue performance, with particular emphasis on the retardation of delamination. Thermoplastic polymer droplets were deposited onto a composite prepreg using inkjet printing. Poly(methyl methacrylate) (PMMA) and polyethylene glycol (PEG) were dissolved in suitable solvents, creating polymer solutions that could be deposited onto prepreg substrates with excellent volume and position control. The prepreg laminae were then laid up to create complete laminates which contained the toughening polymers exclusively in their interlaminar regions, leaving the bulk matrix properties unchanged. Both four-point bending and tensile mechanical tests were used to evaluate the performance of the printed composites. Results showed that multidirectional laminates printed with a solution of 10% by weight 20,000 molecular weight (Mw) PEG in deionised water exhibited significant retardation of delamination, reducing the rate of delamination by almost half in comparison to unprinted laminates. These laminates also showed increases in tensile strength and modulus of 4.9% and 12.3% respectively. Whilst laminates printed with PMMA and lower molecular weight PEG solutions were also shown to improve static mechanical properties, they resulted in greatly increased rates of delamination under cyclic loading. Scanning electron microscopy was also used to analyse the delaminated surfaces of tensile samples. It was found that PMMA did not affect the bulk matrix properties in the interlaminar region. However, PEG was shown to result in increased matrix toughness and fibre/matrix bonding. PEG 20,000 Mw was shown to exhibit the greatest increase in fibre/matrix bonding, whilst PEG 1,500 Mw was shown to increase the ductility of the interlaminar matrix to an extent that was detrimental to the delamination resistance of the laminates. 
The work presented in this thesis generated new understanding of the damage mechanisms operating at the interlaminar interface of cyclically loaded inkjet printed composites. It also demonstrated that such printed composites could potentially outperform unprinted laminates.
526

Structural modification for chatter avoidance in high speed milling

Gibbons, Tom January 2017 (has links)
High speed machining operations, such as milling, are widely used in many industries including the aerospace sector. Elevated manufacturing costs coupled with ever more complex geometry components have led to the need to cut deeper and faster than ever; the dynamics of the structures involved, however, greatly restrict these boundaries. As speed and depth of cut are increased, self-excited vibrations, known as chatter, can occur due to the dynamic interaction between the tool tip and the workpiece. This has undesirable consequences such as poor surface finish, rapid tool and machine wear, and high noise levels, all of which lead to a reduction in production rates and an increase in production costs. Efforts to reduce and control chatter are therefore of great importance to industrial engineers. Selection and design of appropriate cutting tools, in an attempt to minimize the occurrence of chatter, are well established methods in the manufacturing industry; however, the choice of tool (type, diameter, length) is often restricted by the required operation, and since the spindle is set by the machine itself, the only other variable component in the machining structure is the tool holder. Little research has been carried out on the dynamics of the tool holder, despite it being a much simpler structure. This thesis shows that the geometry of the tool holder has a significant effect on the dynamics at the tool tip (source of chatter). Therefore the overall focus of this work is to show how the geometry of the tool holder may be utilised to control the speed and depth at which chatter occurs. Structural modification theory allows for models of smaller, simpler structures to be combined to predict the dynamics of larger, more complex structures. One of its main advantages is that experimental models may be combined with numerical models, allowing for experimental structures to be optimised numerically. 
Structural modification theory is applied to the problem of tool holder dynamics and chatter control. Inverse structural modification is used to optimise the tool holder geometry in terms of the tool tip dynamics and, in turn, the onset of chatter. A tuneable tool holder prototype is designed and tested for use with this structural modification model. In addition to the focus on machining, it is shown that spatial incompleteness is perhaps the largest drawback of structural modification methods. For structural modification to give accurate results, a full spatial model, including rotational degrees of freedom, is needed. Since direct measurement of such information requires specialist equipment often not available to industry, numerical methods such as the finite difference technique have been developed to synthesise rotational data from translational measurements. As with any numerical method, the accuracy of the finite difference technique relies on the correct spacing; however, there is currently no method to select an optimum spacing. An error analysis of the finite difference technique with non-exact data is carried out for application to rotational degree of freedom synthesis.
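The finite difference synthesis mentioned above approximates a rotation from two closely spaced translational measurements. The sketch below uses an assumed sinusoidal mode shape (not measured data) so the exact slope is known, showing the first-order central difference and why the sensor spacing s drives the truncation error.

```python
import numpy as np

def rotation_from_translations(y_minus, y_plus, s):
    """Central difference: theta ~ (y(x+s) - y(x-s)) / (2*s)."""
    return (y_plus - y_minus) / (2.0 * s)

# Assumed mode shape y(x) = sin(pi*x/L), so the exact slope is known.
L, x = 1.0, 0.3
true_theta = (np.pi / L) * np.cos(np.pi * x / L)

errors = {}
for s in (0.10, 0.05, 0.01):
    est = rotation_from_translations(np.sin(np.pi * (x - s) / L),
                                     np.sin(np.pi * (x + s) / L), s)
    errors[s] = abs(est - true_theta)   # shrinks roughly as s**2
```

With exact data the error falls as s is reduced; with noisy (non-exact) measurements the difference quotient amplifies the noise as s shrinks, which is the trade-off the thesis's error analysis addresses.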
527

Investigation of the effects of soot on the wear of automotive engine components

Abdulqadir, Lawal Babatunde January 2017 (has links)
This study is motivated by current trends in the automotive industry, where an increasing level of soot is becoming a challenge for internal combustion engine (ICE) components. Factors responsible for increasing soot levels can be grouped into those explicitly driven by regulations and those driven by technological developments, as well as the requirement for extended drain intervals, which lessen the overall maintenance cost of running a vehicle and reduce the environmental impact of disposed oils. The research work involves a multi-pronged approach to evaluate the behaviour of sooty-oil surrogates (blends of a fully formulated engine oil, HX5 SAE15W-40, and carbon black particles) under various conditions, using laboratory specimens and real engine components. Laboratory specimens are appropriate for evaluating systemic changes in various parameters; they, however, tend to be more homogeneous and have smoother surfaces than real specimens [1]. Real-component testing affords more realistic surface contacts and contact geometries, and therefore loading conditions. Increasing soot loading affects virtually every component involved in the combustion process; the most vulnerable, however, are the valve-train components. Consequently, the focus of the research was on the components within the engine's valve train, related specifically to diesel engines. Based on the specific component simulations that were proposed, two standard commercial tribometers were used, namely the High Frequency Reciprocating Rig (HFRR-Plint TE77) and the Mini Traction Machine (MTM-Plint TE54). Two others were designed and built as part of this research: a Pin-in-Bush (PIB) Rig and a Chain Rig. The generic tests involving the basic geometric contacts, non-conformal point (ball-on-flat) and non-conformal line (ring-on-ring), were designed to mimic specific contact conditions in an internal combustion engine. 
The elephant's foot/valve tip contact of the automotive engine valve train is simulated by the ball-on-flat test with a small stroke length, while the ring-on-ring test, undergoing sliding and rolling concurrently, mimics the valve train's cam lobe/roller follower contact geometry. The pin-in-bush conformal-contact reciprocating sliding rig was designed with conforming interacting surfaces, providing area contact for the pin and bush moving with a sliding action on the lubricant film at the interface. The rig was also adapted, with minimal modification, for the chain rig, which was used for the real engine component test; specifically, a Mercedes Benz M271 timing chain and sprocket. The results obtained from the chain were compared with a similar component that had undergone about 206,000 kilometres in a real engine. The results obtained in this study are comparable with other reported studies using carbon black to mimic engine soot. Although a large volume of data was generated in the course of the various tests and post-test analyses, space limitations do not allow all of it to be presented. For the quantitative results (friction coefficients, viscosity, wear volume and roughness profile), averages of the measured values were determined and used to present the results in various formats. However, only a few of the qualitative analysis (microscopy, SEM, ContourGT and Alicona) results are presented in this thesis. Generally, the pattern of increasing wear volume, viscosity, frictional force and coefficient of friction with increasing carbon black content remains essentially consistent across all the tests carried out in this study. However, the surface roughness revealed a somewhat smoother surface profile for moderate carbon black contents (3-5 wt% CB), especially at moderate normal load. Also, traction coefficient values decreased progressively and consistently with increasing carbon black content in the mixed lubrication tests. 
Wear analysis revealed that the effects of carbon black contamination in the contact are substantial at high concentration levels, high temperatures and high normal loads, as wear scar volume increased with these parameters. Evolution of the wear test further reveals that the progression of wear is a function of time. The severity of wear is more pronounced in the ball-on-flat tests operating under boundary lubrication than in the ring-on-ring (mixed lubrication) tests. Although both contacts are non-conformal, the load in a line contact is distributed over a wider surface area than in a point contact, where a small contact area carries the full load. The observed wear mechanisms can be interpreted as mild three-body abrasive wear at moderate carbon black contents (3-5 wt% CB), where the well-dispersed carbon black particles freely roll and slide between the contacting specimens. At higher carbon contents (7-12 wt% CB), where agglomeration is more likely, the resultant wear mechanism is metal-to-metal contact due to lubricant starvation, access of lubricant into the contact being restricted by the carbon particle agglomerates, resulting in two-body abrasion. Another possible mechanism at higher contents is that some of these hard particles enter the contact zone, where they become squeezed and embedded into the surfaces. The embedded hard particles burrow through the contacting surfaces, forming grooves along the sliding direction. This is also classified as two-body abrasion. For the mini-traction machine (MTM) test, there was also cyclic stress-induced surface fatigue, due to the combined effects of the high contact pressure resulting from the Hertzian line contact and high load, and the occurrence of fatigue-induced incipient damage in the subsurface along the plane of maximum shear stress. The real engine component tests revealed multiple effects on the contacting bodies. 
These range from sliding and rolling actions between the sprocket teeth and the chain rollers, along with the carbon black particles, to the effects of the impact stresses induced by the collision of the chain with the sprocket teeth during engagement. The results emerging from the various post-test analyses have also revealed the damaging effects that any particle infiltration into the contact between the various moving parts of the timing chain system can cause. The analysis of the 'L' (denoting the initial of the owner of the real engine components) vehicle components also gives some valuable and instructive indications of the possible damage that prolonged usage can cause to a component, particularly the timing drive components, which are among the most durable, lifetime components in an internal combustion engine (ICE). The novel idea of using an ultrasound technique to measure the instantaneous film thickness (between the contacting specimens) of selected sooty-oil surrogates also achieved a significant level of success, in that the results obtained are comparable with analytical results in both behavioural trends and numerical values. Notably, as a pioneering move towards determining the real-time film thickness of soot-contaminated oil, this indicates potential for future research.
528

Mechanical characterisation of bone cells and their glycocalyx

Marcotti, Stefania January 2017 (has links)
Mechanotransduction refers to the process by which a cell is able to translate mechanical stimulation into biochemical signals. In bone, mechanotransduction regulates how cells detect environmental stimuli and use these to direct towards bone deposition or resorption. The mechanical properties of bone cells have an impact on the way mechanical stimulation is sensed, however, little evidence is available about how these properties influence mechanotransduction. The aim of the present Thesis was to quantify the mechanical properties of bone cells with a combined experimental and computational approach. Atomic force microscopy was employed to quantify the stiffness of bone cells and their glycocalyx. Changes in cell stiffness during osteocytogenesis were explored. Single molecule force spectroscopy of glycocalyx components was performed to evaluate their anchoring to the cytoskeleton. A single cell finite element model was designed to discern the contributions of sub-cellular components in response to simulated cell nano-indentation. Wide ranges of variation were found for bone cell stiffness and a method was proposed to determine suitable sample sizes to capture population heterogeneity. By targeting single components of the bone glycocalyx, it was possible to hypothesise different mechanotransduction mechanisms depending on the hyaluronic acid attachment to the cytoskeleton. The developed computational framework showed similar results to the nano-indentation experiments and highlighted the role of the actin cytoskeleton in withstanding compression and distributing strain within the cell.
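AFM nano-indentation curves on cells, as used above to quantify bone cell stiffness, are commonly reduced to a Young's modulus with a Hertzian contact model; the abstract does not state which contact model or tip geometry the thesis used, so the sketch below assumes a spherical tip and noise-free synthetic data purely to illustrate the fitting step.

```python
import numpy as np

def hertz_sphere_force(delta, E, nu, R):
    """Hertz force (N) for a sphere of radius R indenting an elastic half-space."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic force curve for a soft, cell-like sample: E = 2 kPa, 5 um bead tip.
R, nu, E_true = 5e-6, 0.5, 2000.0
delta = np.linspace(0.0, 1e-6, 50)                 # indentation depth (m)
force = hertz_sphere_force(delta, E_true, nu, R)

# Recover E by linear least squares of F against delta**1.5.
slope = np.linalg.lstsq(delta[:, None] ** 1.5, force, rcond=None)[0][0]
E_fit = slope * 3.0 * (1.0 - nu ** 2) / (4.0 * np.sqrt(R))
```

On real data the fit is performed over the post-contact portion of the curve, and the wide stiffness ranges reported in the thesis correspond to spread in the fitted E across cells.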
529

Upper body accelerations as a biomarker of gait impairment in the early stages of Parkinson's disease

Buckley, Christopher January 2017 (has links)
Neurodegenerative diseases such as Parkinson's disease (PD) impair the ability to walk safely and efficiently. Currently, clinical rating scales designed to assess gait are often described as subjective and as lacking the sensitivity to detect gait impairments in the early stage of the disease. Devices are available to measure gait objectively within research laboratories; however, they are often expensive and require trained expertise. Inertial measurement units (IMUs) may be an ideal device for measuring gait while overcoming many of the limitations of other devices. They can measure movements of the upper body, which is known to be impaired in PD, and may therefore enable the calculation of a variety of acceleration-based variables better able to quantify impaired gait in PD. This thesis aimed to determine the ability of a variety of acceleration-based variables, obtained from different locations on the upper body, to distinguish movements symptomatic of PD from those of age-matched controls. Variables yet to be applied to PD were tested, and methodological reasons for the differing results found in the literature were analysed, in an attempt to develop a refined methodology specific to PD. Acceleration-based variables were tested relative to, and combined with, variables obtained from a 7 m pressure-sensitive mat. It was tested whether these variables bring additional information about a patient's gait or are merely a reflection of lower-limb mechanics, and whether they can classify PD gait independently or in combination with a pre-existing spatiotemporal model of gait. Results showed that, for a large population of people with early-stage PD, upper body acceleration variables not previously applied to PD were capable of highlighting gait impairments. However, attention must be paid to the processing of the acceleration signals, as the method used to realign signals to a global reference can significantly impact a variable's sensitivity to PD. 
Lastly, it was shown that the majority of upper body acceleration variables are distinct from the typically measured spatiotemporal information and, when using a multivariate approach, were equally capable of highlighting gait impairment in PD. This thesis therefore proposes that variables calculated from the upper body using IMUs can be useful biomarkers of gait impairment in the early stage of PD and, where possible, should be used in conjunction with traditional approaches.
530

Computational mechanics of fracture and fatigue in composite laminates by means of XFEM and CZM

Tafazzolimoghaddam, Behrooz January 2017 (has links)
This thesis concerns the computational fracture analysis of static and fatigue fracture in advanced composite materials using the Extended Finite Element Method (XFEM). In both analytical and numerical approaches, the techniques and procedures need adjustment to account for the numerous effects brought by the heterogeneous and orthotropic nature of advanced composite materials. The first part of this study is on the calculation of the Energy Release Rate (ERR) for cracks in composite structures. J-integrals are widely used in computational methods for ERR evaluation; however, they do not show consistency in structured materials when the crack is close to material interfaces. Furthermore, when J-integrals are implemented in XFEM, the enrichment functions of the crack tip and the interfaces create even more complications. The outcome of the first study clarified that the linear elastic fracture mechanics (LEFM) approach on its own suffers from the effects caused by the crack-tip singularity and the stress field definition at the crack tip. The Cohesive Zone Model (CZM) is selected as an alternative to prevent some of the complications caused by material heterogeneity and the singularity at the crack tip. Although CZM is a damage-based approach, it can be linked to LEFM, which is particularly useful for fatigue modelling. In the second part, the implementation of CZM in XFEM for quasi-static and fatigue modelling is presented. Unlike previous FE implementations of CZM [14, 136], the current approach does not include the undamaged material in the traction-separation law, to avoid enriching undamaged elements. For the high-cycle fatigue model, a thermodynamically consistent approach links the Paris law crack growth rate to the damage evolution. A new numerical approach is proposed for the implementation of the CZM for quasi-static and fatigue fracture modelling in XFEM. The outcomes are then compared with the results of other experimental and numerical studies. 
The fatigue test results comply with the Paris law predictions; however, linking the Paris law with the damage evolution in the cohesive zone is prone to produce errors, since different parts of the cohesive zone undergo different degradation rates.
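The Paris law growth rate referred to above can be illustrated by integrating da/dN = C·(ΔK)^m over crack length to predict a fatigue life; the constants, geometry factor Y and crack sizes below are illustrative assumptions (units of MPa·√m and m/cycle), not values from the thesis, which instead maps this growth rate onto cohesive damage evolution.

```python
import numpy as np

def paris_life(a0, af, C, m, dsigma, Y=1.0, n=20000):
    """Cycles to grow a crack from a0 to af under da/dN = C*(dK)**m,
    with dK = Y * dsigma * sqrt(pi * a); trapezoidal integration of dN/da."""
    a = np.linspace(a0, af, n)
    dadn = C * (Y * dsigma * np.sqrt(np.pi * a)) ** m
    dnda = 1.0 / dadn
    return float(np.sum(0.5 * (dnda[1:] + dnda[:-1]) * np.diff(a)))

# Illustrative case: a 1 mm crack grown to 10 mm under a 100 MPa stress range.
N = paris_life(a0=1e-3, af=1e-2, C=1e-12, m=3.0, dsigma=100.0)
```

For m ≠ 2 the integral also has the closed form N = 2(a0^(1−m/2) − af^(1−m/2)) / ((m−2)·C·(YΔσ√π)^m), against which the numerical result can be checked.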
