  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Discrete element modeling of cemented sand and particle crushing at high pressures

de Bono, John Patrick January 2013 (has links)
This project aims to provide insight into the behaviour of cemented sand under high pressures, and to further the understanding of the role of particle crushing. The discrete element method is used to investigate the micromechanics of sand and cemented sand in high-pressure triaxial tests and one-dimensional normal compression. Using the software PFC3D, a new triaxial model has been developed, which features an effective flexible membrane that allows free deformation of the specimen and the natural failure mode to develop. The model is capable of exerting and sustaining high confining pressures. Cementation has been modelled using inter-particle bonds, and a full investigation of the bond properties is presented, highlighting their influence on the macroscopic behaviour (e.g. failure mode and volumetric response). A simple particle breakage mechanism is used to model the one-dimensional normal compression of sand. By considering the stresses induced in a particle due to multiple contacts, and allowing particles to fracture without the use of agglomerates, this work aims to explain the mechanics of normal compression. The influence of the mechanics of fracture on the slope of the normal compression line is investigated, and the normal compression is linked to the evolution of a fractal particle size distribution. A new equation for the one-dimensional normal compression line is proposed, which includes the size effect on average particle strength, and demonstrates agreement with experimental results. It is shown that this new equation holds for a wide range of simulations. The time dependence of particle strength is incorporated into this model to simulate one-dimensional creep tests, leading to a new creep law. The normal compression of cemented sand is investigated, and the results show that bonding reduces particle crushing, and that it is both the magnitude and distribution of bond strengths that influence the compression curve of the structured material.
Simulations are also presented that show that it is possible to capture the effects of particle crushing in high-pressure triaxial tests on both sand and cemented sand. Particle crushing is shown to be essential for capturing realistic volumetric behaviour, and the intrusive capabilities of the discrete element method are used to gain insight into the effects that cementation has on the degree of crushing.
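The abstract above links the slope of the normal compression line to the size effect on average particle strength. A minimal sketch of the Weibull-type size-hardening law commonly used in the particle crushing literature; all numerical values (reference size, characteristic strength, Weibull modulus) are illustrative, not taken from the thesis:

```python
import math

def mean_strength(d, d0=1.0, sigma0=20.0, m=3.3):
    """Weibull size effect: mean crushing strength rises as particle
    diameter falls. sigma0 is the characteristic strength of a particle
    of reference size d0; m is the Weibull modulus. Values illustrative."""
    return sigma0 * (d / d0) ** (-3.0 / m)

# Halving the particle size raises the mean strength by 2**(3/m).
ratio = mean_strength(0.5) / mean_strength(1.0)
print(round(ratio, 3))
```

Because smaller fragments are statistically stronger, each successive generation of splitting requires a higher stress, which is one way the size effect feeds into the shape of the compression curve.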
2

A factorial approach to the study of alkali-silica reaction in concrete

Robert, E. C. January 1986 (has links)
This thesis describes a research project investigating certain aspects of the alkali-silica reaction in concrete. A factorial approach was adopted for the experimental stages of the investigation. Such an approach is based on proven statistical techniques and has the advantage of allowing any interaction between the experimental factors to be studied in addition to the main effect of each separate factor. The experimental work was carried out in three stages. The first and second stages comprised physical tests which involved monitoring expansion in the test specimens. The third stage involved chemical analysis to determine the alkali content of the test specimens. The main parameters included in the investigation were: (a) the alkali content of the Portland cement; (b) the use of pulverised fuel ash as a cement replacement material; (c) the amount of reactive aggregate; and (d) the free water content of the mix. In addition, the use of pulverised fuel ash as an admixture and the different effects produced by the two most commonly used reactive aggregates in laboratory tests (namely Pyrex glass and Beltane opal) were studied. In general the experimental work was carried out using standard test methods as specified by the British Standards Institution and the American Society for Testing and Materials. In particular the "mortar bar method", as described in A.S.T.M. C227, was used extensively. Where necessary the standard test methods were adapted to suit the experimental nature of the investigation. It is shown that the main effects of the experimental factors are all highly significant and thus contribute considerably to the expansion of the mortar bar specimens. Moreover, the level of interaction between certain of the factors is also shown to be significant. This indicates that, in some cases, the effects of the individual factors are not independent.
The experimental results also show the different responses produced by the two reactive aggregate materials. With respect to the use of pulverised fuel ash, certain time dependent trends are discussed which may suggest some long term instability in the beneficial effect which pulverised fuel ash can achieve by reducing expansions. The factorial approach assisted in highlighting this time dependent effect in the experimental data. The chemical analysis results have shown that pulverised fuel ash tends to increase the alkali content of the mortar bars. This is discussed with respect to the mode of operation of pulverised fuel ash in inhibiting expansion due to alkali-silica reaction, which is considered to be one of chemical reaction and not a simple dilution effect. An explanation of the contribution of pulverised fuel ash and the differences noted between its use with Pyrex glass and Beltane opal is hypothesised in terms of the relative affinity of each of these three materials for reaction with the hydroxyl ion present in the pore fluids.
3

Risk and safety in engineering processes

Lucic, Ivan January 2010 (has links)
This research project focused on the treatment of safety risks in railways. Existing methodologies for the assessment and management of safety risk on railways are mostly empirical and have been developed out of a need to satisfy regulatory requirements and in response to a number of major accidents. Almost all of these processes and methodologies have been developed in support of approvals of specific products or very simple systems and do not add up to a holistic, coherent methodology suited to the analysis of modern, complex systems involving many vastly different constituents (software, hardware, people, products developed in different parts of the world, etc.). The complexities of modern railway projects necessitate a new approach to risk analysis and management. At the outset, the focus of the research was on the organisation of the family of existing system analysis methodologies into a coherent, heterogeneous methodology. An extensive review of existing methodologies and processes was undertaken and is summarised in this thesis. Relationships between different methodologies and their properties were investigated, seeking to define the rules for embedding these into a hierarchical nest and relating their emergent properties. Four projects were utilised as case studies for the evaluation of existing methodologies and processes and for initial development. This thesis describes the methodology adopted in support of development of the System Safety Case and the structure of the document. Based on that experience and knowledge, a set of high-level requirements was identified for an integrated, holistic system safety analysis and management process. A framework consisting of existing and novel methodologies and processes was developed and trialled on a real-life project. During the trial several gaps in the process were identified, and adequate new methodologies or processes were defined and implemented to complete the framework.
The trial was very successful, and the new framework, referred to as the Engineering Safety Case Management Process, has been implemented across the London Underground Upgrades Directorate.
4

Development of wavelength dependent pH optical sensor using Layer-by-Layer technique

Raoufi, N. January 2014 (has links)
Stable and reliable operation of an optical sensor for pH monitoring is important for many industrial applications. This dissertation reports a series of studies on the development of novel and highly sensitive fibre optic sensors which are based on wavelength, instead of intensity, changes, and on the development of thin-film optical fibre combinations for effectively enhancing the durability and value of the sensor probe. Several novel optical fibre sensors were fabricated and evaluated in this work. In order to measure the pH of a solution using optical methods, the sensor probes were prepared using layer-by-layer deposition techniques, a simple and versatile method to deposit a sensitive thin film, i.e. active pH indicators, on such optical fibre-based devices. In further work, the selection of a charged and water-soluble pH indicator which introduces the highest wavelength shift while varying the pH of the media was investigated, since the wavelength shift was considered the basis of the sensitivity index. Brilliant yellow (BY) was applied as an indicator because of its greater wavelength shift with pH change compared to the use of other indicators. Poly(allylamine hydrochloride) (PAH) was also used as a crosslinker. To this end, layers of BY/PAH were deposited on the bare silica core optical fibre using the layer-by-layer technique. The research was then developed to optimize the design factors that have an important effect on the sensitivity of the device. Utilizing a V-shaped fibre with small radius, coated with six bilayers of BY/PAH prepared from a polyion solution of low concentration, was seen to provide a sensor with a wider range of sensitivity, presenting a highly sensitive device working over a smaller pH range and offering higher resolution.
5

Investigation of advanced experimental and computational techniques for behavioural characterisation of Phase Change Materials (PCMs)

Stankovic, Stanislava January 2014 (has links)
The existing Phase Change Material (PCM) thermal investigation methods have significant drawbacks and limitations in terms of the correct determination of phase change temperature and enthalpy values. This results in inaccurate, and sometimes absent, experimental data which are required for the implementation of PCM-based Thermal Energy Storage (TES) systems. An advanced T-history method for PCM characterisation was developed to overcome some of the shortcomings of the existing PCM thermal investigation procedures. The advanced T-history setup and the instrumentation system, coupled with a LabView virtual instrument which allows the continuous acquisition of T-history signals, were carefully designed, developed and evaluated. The development process was performed by sequentially addressing all the issues relating to the control and sensing mechanisms of the T-history setup, measurement accuracy and precision, PCM data representation, hysteresis, and finally subcooling. The instrumentation system was iteratively redeveloped and validated in a series of studies until ±0.5 °C accuracy in PCM-related measurements was achieved. Once the desired temperature accuracy was reached, the data evaluation technique was implemented in MATLAB to allow the determination of thermo-physical PCM properties from the measured T-history data. Furthermore, detailed studies of PCMs from the RT and PT organic series were performed. These comprehensive PCM investigations revealed various results, including details regarding the materials' behaviour upon both cooling and heating, the heat release/storage in given 0.5 °C wide temperature intervals, the respective enthalpy-temperature curves, and the total heat released/stored with respect to mass and volume.
The comparison of the RT results with the data provided by the manufacturer showed very good agreement in terms of temperature (±1 °C margin) and heat release/storage content (±10 % margin), proving the validity of the advanced T-history method. A new data evaluation technique considering subcooling was implemented in MATLAB to allow the correct characterisation of inorganic PCMs, and the obtained results were presented accordingly. Moreover, the PT PCM data were re-evaluated, showing that subcooling in these materials can be neglected. Finally, pilot optical transmittance studies over a wide wavelength range (from 280 to 700 nm) at different temperatures were carried out and showed that the phase change temperature is one of the most determinative factors of a material's applicability in PCM-enhanced glazing units used in solar applications. The results from the PCM characterisation measurements confirmed that better-planned PCM experimental tests, in terms of more accurate and precise sensing and control modalities, provide more comprehensive and reliable results than those described in the literature so far, and hence enable the development of more efficient PCM-based TES systems.
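As a rough illustration of the T-history idea used above: the sketch below compares synthetic cooling curves of a PCM and a water reference, and converts the ratio of their areas above ambient into an enthalpy estimate for a 0.5 °C interval. The curves, masses and simplified formulation are assumptions for illustration, not the thesis's setup or data:

```python
import numpy as np

# Synthetic cooling histories (degrees C) sampled at 1 s; illustrative only.
t = np.arange(0, 600.0, 1.0)
T_amb = 20.0
T_ref = T_amb + 40.0 * np.exp(-t / 150.0)   # reference (water) cools faster
T_pcm = T_amb + 40.0 * np.exp(-t / 300.0)   # PCM lingers (stores more heat)

def area_between(T, lo, hi):
    """Integral of (T - T_amb) dt over the samples with lo <= T <= hi,
    the quantity the T-history method compares between PCM and reference."""
    mask = (T >= lo) & (T <= hi)
    return np.trapz(T[mask] - T_amb, t[mask])

# Enthalpy released by the PCM in a 0.5 degC interval, per unit mass
# (simplified T-history balance; tube heat capacity neglected).
cp_ref, m_ref, m_pcm = 4.18, 1.0, 1.0       # J/(g K), g, g -- assumed
lo, hi = 30.0, 30.5
dH = cp_ref * (m_ref / m_pcm) * (hi - lo) \
     * area_between(T_pcm, lo, hi) / area_between(T_ref, lo, hi)
print(dH > cp_ref * (hi - lo))              # PCM stores more than water would
```

The slower the sample crosses a temperature interval relative to the reference, the larger the inferred enthalpy change in that interval, which is how the enthalpy-temperature curve is built up 0.5 °C at a time.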
6

Wavelet-based characterization and stochastic modelling of pulse-like ground motions on the time-frequency plane

Lungu, A. January 2014 (has links)
A novel non-separable, non-stationary stochastic model for the representation and simulation of pulse-like earthquake ground motions (PLGMs), capable of accurately representing peak elastic and inelastic structural responses, is proposed in this work. Further, the model is employed for assessing the performance of several time-frequency representation techniques (the harmonic wavelet transform, the Meyer wavelet packet transform, the S-transform and the empirical mode decomposition) in capturing salient features of pulse-like accelerograms. The significantly higher structural demands posed by PLGMs in comparison with similar-intensity pulse-free motions have led to comprehensive investigations aimed at mitigating the damage experienced in the affected areas, such as those located near seismic faults. In this regard, time-frequency analysis methods are frequently employed for the analysis of signals recorded during these events, due to their adaptability to the specific evolutionary behaviour. Alongside characterization, stochastic modelling of PLGMs is of interest since it allows for systematic variations of the input parameters in order to enhance the understanding of their influence on the structural behaviour. This is particularly useful since only a limited number of PLGMs are available in the existing earthquake databases. Accordingly, inspired by the time-frequency distribution of their total energy, a versatile PLGM model is defined as a combination of amplitude-modulated stochastic processes. Each process models the time-varying distribution of the energy for adjacent frequency ranges. Two alternative formulations are proposed for representing the low-frequency content characterizing the pulses. Considering a set of pulses from the literature, numerical results show that the pulse models' parameters can be calibrated to simulate, on average, the structural impact of these pulses represented using the model defined herein.
Further, the capability of the PLGM model to generate elastic and inelastic spectral responses matching a given field-recorded accelerogram in the mean sense is illustrated. The applicability of the proposed model to account for near-fault effects in spectrum-compatible representations of the seismic action is illustrated by generating a fully stochastic process compatible with the response spectrum of the European aseismic code (EC8). Furthermore, the model can be employed in various applications, including the generation of accelerograms for nonlinear dynamic analyses of structures, probabilistic seismic demand analyses, or as input in stochastic dynamic techniques such as statistical linearization. Finally, the capability of several time-frequency analysis methods to characterize PLGM accelerograms is evaluated through comparative numerical studies within a novel methodology, namely by considering artificial time-histories as samples of the proposed model. The results highlight the potential of the S-transform for pulse identification/extraction and of the harmonic wavelet transform for record characterization/pulse extraction. Additionally, they confirm that from an engineering perspective the structural natural period is an appropriate and representative parameter for the definition of "pulses". Overall, these analyses shed light on the challenges experienced when attempting to detect the pulse content in accelerograms, in an effort to inform best practices for PLGM characterization.
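The model summarised above combines amplitude-modulated stochastic processes over adjacent frequency bands. A minimal sketch of that general idea (band-limited Gaussian noise shaped by per-band time envelopes); the band edges, envelope shapes and weights are illustrative, not the thesis's calibrated formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 2048
t = np.arange(n) * dt

def band_process(f_lo, f_hi, envelope):
    """Gaussian process band-limited to [f_lo, f_hi] Hz in the frequency
    domain, then amplitude-modulated in time (illustrative scheme)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, dt)
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    x = np.fft.irfft(spec, n)
    return envelope * x / (np.std(x) + 1e-12)

# Gamma-type envelopes concentrating each band's energy at its own time.
env = lambda t0, w: (t / t0) ** 2 * np.exp(-w * t / t0)
acc = (0.8 * band_process(0.2, 2.0, env(4.0, 2.0))     # low-freq pulse band
       + 0.5 * band_process(2.0, 8.0, env(3.0, 2.5))   # mid band
       + 0.3 * band_process(8.0, 20.0, env(2.5, 3.0))) # high band
print(acc.shape)
```

Because each band carries its own envelope, the total energy is distributed non-separably over the time-frequency plane, which is the property the abstract's model exploits to mimic pulse-like records.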
7

Context-aware attribute-based techniques for data security and access control in mobile cloud environment

Li, F. January 2015 (has links)
The explosive growth of mobile applications and Cloud computing has enabled smart mobile devices to host various Cloud-based services such as Google apps, Instagram, and Facebook. Recent developments in smart devices' hardware and software provide seamless interaction between users and devices. As a result, in contrast to the traditional user, the mobile user in a mobile Cloud environment generates a large volume of data which can be easily collected by mobile Cloud service providers. However, users do not know the exact physical location of their personal data. Hence, users have no control over their data once it is stored in the Cloud. This thesis investigates security and privacy issues in such mobile Cloud environments and presents new user-centric access control techniques tailored for them. Most of the work to date has tried to address data security issues on the Cloud server, and only little attention has been given to protecting users' data privacy. One way to address the privacy issues is to deploy an access control technique such as the Extensible Access Control Markup Language (XACML) to control access to users' data. XACML defines a standard for access control policies, rule obligations and conditions in data access control. XACML utilizes Extensible Markup Language (XML) schema to define attributes of data requesters, resources, and the environment in order to evaluate access requests. A user-centric attribute-based access control model using XACML, which enables users to define privacy access policies over their personal data based on their preferences, is presented. In order to integrate data security and user privacy in the mobile Cloud environment, the thesis investigates attribute-based encryption (ABE) schemes. An ABE scheme enables data owners to enforce access policies during encryption.
Context-related attributes such as the requester's location and behavior are incorporated within the ABE scheme to provide data security and user privacy. This enables mobile data owners to dynamically control access to their data at runtime. In order to improve performance, a solution that offloads the high-cost computational work and communications from the mobile device to the Cloud is proposed. Anonymisation techniques are applied in the key issuing protocol so that users' identities are protected from being tracked by service providers during transactions. The proposed schemes are secure against known attacks and hence suitable for the mobile Cloud environment. The security of the proposed schemes is formally analyzed using standard methods.
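The attribute- and context-based access control described above can be caricatured as rule matching over requester and context attributes. This is a toy evaluator, not XACML itself; the attribute names, policy and default-deny behaviour are invented for illustration:

```python
# Minimal XACML-flavoured policy evaluation: a rule applies only when the
# requester's attributes (including context such as location) all match.
def evaluate(policy, request):
    for rule in policy["rules"]:
        if all(request.get(k) == v for k, v in rule["match"].items()):
            return rule["effect"]
    return policy.get("default", "Deny")

policy = {
    "rules": [
        {"match": {"role": "doctor", "location": "hospital"}, "effect": "Permit"},
        {"match": {"role": "doctor"}, "effect": "Deny"},  # outside hospital
    ],
    "default": "Deny",
}

print(evaluate(policy, {"role": "doctor", "location": "hospital"}))  # Permit
print(evaluate(policy, {"role": "doctor", "location": "home"}))      # Deny
```

Rule order matters here: the more specific, context-aware rule is listed first, mirroring how context attributes let an owner permit access only under the right runtime conditions.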
8

The approximate Determinantal Assignment Problem

Petroulakis, G. January 2015 (has links)
The Determinantal Assignment Problem (DAP) is one of the central problems of Algebraic Control Theory and refers to solving a system of non-linear algebraic equations to place the critical frequencies of the system at specified locations. This problem is decomposed into a linear and a multi-linear subproblem, and the solvability of the problem is reduced to an intersection of a linear variety with the Grassmann variety. The linear subproblem can be solved with standard methods of linear algebra, whereas the intersection problem is a problem within the area of algebraic geometry. One of the methods to deal with this problem is to solve the linear problem and then find which element of this linear space is closest - in terms of a metric - to the Grassmann variety. If the distance is zero then a solution for the intersection problem is found; otherwise we get an approximate solution for the problem, which is referred to as the approximate DAP. In this thesis we examine the second case by introducing a number of new tools for the calculation of the minimum distance of a given parametrized multi-vector (describing the linear variety implied by the linear subproblem) from the Grassmann variety, as well as the decomposable vector that realizes this least distance, using constrained optimization techniques and other alternative methods, such as the SVD properties of the so-called Grassmann matrix, polar decompositions and other tools. Furthermore, we give a number of new conditions for the appropriateness of the approximate polynomials which are implied by the approximate solutions, based on stability radius results.
The approximate DAP problem is completely solved in the 2-dimensional case by examining uniqueness and non-uniqueness (degeneracy) issues of the decompositions, expansions to constrained minimization over more general varieties than the original ones (Generalized Grassmann varieties), derivation of new inequalities that provide closed-form non-algorithmic results and new stability radii criteria that test if the polynomial implied by the approximate solution lies within the stability domain of the initial polynomial. All results are compared with the ones that already exist in the respective literature, as well as with the results obtained by Algebraic Geometry Toolboxes, e.g., Macaulay 2. For numerical implementations, we examine under which conditions certain manifold constrained algorithms, such as Newton's method for optimization on manifolds, could be adopted to DAP and we present a new algorithm which is ideal for DAP approximations. For higher dimensions, the approximate solution is obtained via a new algorithm that decomposes the parametric tensor which is derived by the system of linear equations we mentioned before.
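For the 2-dimensional case examined above, membership of the Grassmann variety is decided by the quadratic Plücker relation: a 2-vector is decomposable exactly when the relation vanishes, and the approximate DAP arises when it does not. A minimal check for wedge^2(R^4), with the standard coordinate ordering assumed for illustration:

```python
def plucker_residual(p):
    """Quadratic Plucker relation for a 2-vector in wedge^2(R^4), with
    coordinates ordered (p12, p13, p14, p23, p24, p34). The residual is
    zero iff the vector is decomposable, i.e. lies on the Grassmann
    variety G(2, 4)."""
    p12, p13, p14, p23, p24, p34 = p
    return p12 * p34 - p13 * p24 + p14 * p23

# Decomposable example: e1 ^ e2 gives p12 = 1, all other coordinates 0.
print(plucker_residual([1, 0, 0, 0, 0, 0]))   # 0

# Non-decomposable example: e1 ^ e2 + e3 ^ e4 (p12 = p34 = 1).
print(plucker_residual([1, 0, 0, 0, 0, 1]))   # 1
```

A non-zero residual like the second case is what forces the approximate treatment: one then seeks the nearest multi-vector with zero residual, the nearest decomposable vector on the variety.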
9

Influence of attachment line flow on form drag

Gowree, Erwin R. January 2014 (has links)
Numerical analysis conducted using Callisto, Airbus's three-dimensional momentum-integral boundary layer code coupled with Green's lag-entrainment method, has shown that there might be a small but worthwhile form drag reduction through attachment line control, up to about 0.4-0.6 counts for an aircraft. However, in order to overcome numerical issues in the modelling, a few approximations have been made in the method while calculating the flow very near the leading edge. The detail of the leading edge flow needs to be verified if the drag results are to be trusted. Therefore, an experiment was carried out aiming to capture the velocity profiles starting from the attachment line and extending up to about 3% chord downstream. In order to design the experimental model, a systematic approach was used based on previous semi-empirical work on attachment line flow. The model was designed so that the attachment line boundary layer is turbulent, due to contamination from the turbulent boundary layer on the wall (floor) of the wind tunnel, and thick enough to give a sensible experimental domain size, including sufficient chordwise extent for hot-wire measurement. The velocity profiles were captured by means of hot-wire anemometry using a micro-displacement traverse designed and manufactured in-house.
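The design requirement above, a turbulent attachment line via contamination from the tunnel-floor boundary layer, is usually checked against an attachment-line Reynolds number. This sketch uses the commonly quoted form R_bar = W * eta / nu with Poll's approximate contamination threshold of 245; the velocity and gradient inputs are illustrative, not values from the experiment:

```python
import math

def attachment_line_reynolds(W_inf, dUe_dx, nu=1.5e-5):
    """Attachment-line Reynolds number R_bar = W * eta / nu, with the
    length scale eta = sqrt(nu / (dUe/dx)). W_inf is the spanwise velocity
    along the attachment line (m/s), dUe_dx the chordwise edge-velocity
    gradient at the attachment line (1/s), nu the kinematic viscosity of
    air (m^2/s). Input values below are illustrative assumptions."""
    eta = math.sqrt(nu / dUe_dx)
    return W_inf * eta / nu

R_bar = attachment_line_reynolds(W_inf=30.0, dUe_dx=100.0)
print(round(R_bar), R_bar > 245)   # well above Poll's ~245 threshold
```

Sizing the model so that R_bar comfortably exceeds the threshold is one way to guarantee the contaminated, turbulent attachment line that the measurements required.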
10

Utility applications of smart online energy systems : a case for investing in online power electronics

Elsayed, Hatim Ibrahim January 2014 (has links)
The backbone of any power grid, the transmission and sub-transmission networks, should be flexible, robust, resilient and self-healing to cope with a wide range of adverse network conditions and operations. Power electronic applications are making a major impact on the present and future state of power system generation, transmission and distribution. These applications include FACTS (Flexible Alternating Current Transmission) and HVDC (High Voltage Direct Current) in transmission, and Custom Power devices in distribution. FACTS devices are some of the advanced assets that network planners can use to make the transmission grid more flexible and robust. Many established research ideas to advance the operation of these devices have been published in the open literature over the last ten years. The most recent publications in this field are reviewed in this thesis. A critical analysis of the literature and existing conditions reveals a range of potentials that are ideal for development in Qatar's increasingly strained electricity network. As a result of the demand surge in Qatar in recent years, and the forecast that it will continue to grow at the same rate, the need for improvement in the Qatar Power Transmission System (QPTS) is great and significant. Conventional planning and operational solutions such as conductor up-rating and fixed series capacitors (FSC) are considered. However, there are growing challenges in obtaining new rights of way for new overhead lines and even corridors for new cables. Advanced FACTS devices are considered for dynamic control of power flows and voltages, such as the TCSC (Thyristor Controlled Series Capacitor) and GUPFC (Generalized Unified Power Flow Controller). The research in this thesis examines the potential for QPTS to improve and develop, with emphasis on increased output through integrated online energy systems and online FACTS and HVDC controllers based on synchrophasor measurements.
The devices are modelled in Siemens PTI's PSS®E software, through a steady-state case study, to investigate power flow control and voltage support. A comparison between similar FACTS technologies, such as the SVC and STATCOM, is also presented. The improvement in power flow imbalance between transmission lines with different ratings and lengths is studied. The FACTS devices are tested for voltage support to enhance the network voltage profile and hence increase security and reliability for important industrial customers. Optimization techniques for FACTS device allocation and rating are discussed, considering voltage improvement and optimal power flow control. The results showing the network improvement achieved with the FACTS devices are presented in the case studies. In a separate case study, the application of medium-voltage custom power devices to convert DC battery storage and photovoltaic energy into AC energy using a power conversion system is discussed. The dynamic mode of the STATCOM is modelled in QPTS in the succeeding case study using the same software and compared with capacitor banks. This is followed by another case of HVDC analysis, modelled with and without a STATCOM present. The thesis discusses the real-time operation and control of power system physical parameters in QPTS using capacitors, FACTS and HVDC. The key contribution of this thesis is the application and testing of all sorts of FACTS and HVDC devices in QPTS. The system-wide, coordinated control of FACTS (Online Power Electronics, OPE) is a new concept. Another major contribution is being able to take a system-wide approach to a transmission smart grid application. The results of the thesis have been presented at international conferences in the USA, Hong Kong, France and Portugal, and locally in the Arabian Gulf (Dubai, Oman and Qatar). The thesis's papers are listed in the 'References' section and in Appendix F.
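The role of series-compensating FACTS devices such as the TCSC discussed above can be seen in the textbook power transfer relation P = V1 V2 sin(delta) / X: lowering the effective line reactance raises the transferable power. A minimal sketch with illustrative per-unit values, not numbers from the QPTS studies:

```python
import math

def power_transfer(V1, V2, X, delta_deg):
    """Steady-state active power over a lossless line:
    P = V1 * V2 * sin(delta) / X (per-unit)."""
    return V1 * V2 * math.sin(math.radians(delta_deg)) / X

V1 = V2 = 1.0      # bus voltage magnitudes, p.u. (illustrative)
X_line = 0.5       # series reactance, p.u. (illustrative)
delta = 30.0       # power angle, degrees

P_uncomp = power_transfer(V1, V2, X_line, delta)
# A TCSC providing 40% series compensation lowers the effective reactance:
P_comp = power_transfer(V1, V2, X_line * (1 - 0.4), delta)
print(round(P_uncomp, 3), round(P_comp, 3))
```

The same relation explains the power flow rebalancing studied in the thesis: inserting controllable series reactance in one line redirects flow between parallel paths with different ratings and lengths.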
