Optical fibre sensors for monitoring prestressed concrete structures in nuclear power plants. Perry, Marcus K. A. January 2013.
Lifetime extensions of nuclear fission reactors in the UK are required to satisfy growing demands for electrical power. Many of these reactors are nearing the end of their original design life, so the continued structural integrity, particularly of the reactors' prestressed concrete pressure vessels and containments, is of prime concern. Currently, a lift-off inspection of a 1 % random sample of prestressing tendons is performed at 18-month to 5-year intervals to ensure adequate prestress is present in these structures, but the extended lifetimes are making higher-resolution, more frequent and in-depth monitoring techniques more desirable. In this thesis, a method of instrumenting prestressing strands with optical fibre Bragg grating strain sensors is outlined. An all-metal encapsulation and bonding technique is developed to ensure sensor reliability under the radioactive and high-stress environments of fission reactors. This 'smart strand' is complemented by a specially developed interrogation scheme capable of continuously and automatically monitoring static and dynamic nanoscale changes in Bragg grating strain. High-resolution interrogation was achieved by extending an interferometric demodulation technique into the static measurement regime. By modulating the strain sensitivity using a fast optical switch, strain signals could be recovered independently of noise sources using various signal processing algorithms. The application of this technology could augment the continued monitoring of concrete vessel integrity, reducing both the risks and costs associated with performing lift-off measurements in the current and next generation of nuclear reactors.
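The fibre Bragg grating sensing described above rests on the standard linear relation between Bragg wavelength shift and axial strain. A minimal sketch, assuming a typical 1550 nm grating and an effective photo-elastic coefficient of about 0.22 (both illustrative values, not taken from the thesis):

```python
# Strain recovery from a fibre Bragg grating (FBG) wavelength shift.
# Uses the standard relation delta_lambda / lambda_B = (1 - p_e) * strain;
# this is the textbook FBG model, not the thesis's interferometric
# interrogation scheme. lambda_B and p_e are assumed typical values.

def fbg_strain(delta_lambda_nm, lambda_b_nm=1550.0, p_e=0.22):
    """Return axial strain inferred from a Bragg wavelength shift in nm."""
    return delta_lambda_nm / (lambda_b_nm * (1.0 - p_e))

# A shift of ~1.2 pm at 1550 nm corresponds to roughly one microstrain,
# which gives a feel for why picometre-scale interrogation is needed:
strain = fbg_strain(1.2e-3)
print(f"{strain * 1e6:.3f} microstrain")
```

The picometre-per-microstrain scale of this relation is what motivates the nanoscale-resolution interrogation scheme the abstract describes.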
Clinical evaluation of the HINS-light EDS for the continuous light-based decontamination of the burns unit inpatient and outpatient settings. Bache, Sarah E. January 2013.
The consequences of sustaining a burn are potentially devastating to a person. Even a relatively smooth recovery from a burn injury can be traumatic to both the body and mind of the patient. Complications, such as infection, only serve to augment this traumatic period through prolonging recovery and worsening outcome. Notwithstanding the great advances in burn treatment made during the last half century, the presence of infection has remained a major influence in dictating the path of recovery for an individual. In fact, advances in resuscitation, surgery, and intensive care support have only served to emphasise the role played by infection. Patients with even severe burns are now surviving their initial injury and remaining in hospital for prolonged periods of rehabilitation. Coupled with a worldwide increase in multi-drug resistant bacteria, and an endemic overuse of increasingly complex regimens of antibacterials, the threat from nosocomial pathogens is greater than ever. As bacteria become increasingly resistant to antibiotics, novel bactericidal technologies must be explored. Furthermore, emphasis has shifted from treatment to prevention, specifically prevention of cross-contamination between patients. The High-Intensity Narrow-Spectrum light Environmental Decontamination System (HINS-light EDS) is one such weapon in the armamentarium against cross-infection. It works using a safe blue light to kill bacteria in the air and on surfaces around patients and staff. When considering the setting for the first clinical trials of the effectiveness of this light, no area was considered to be more appropriate than the burns unit, due to the high density and great significance of bacteria in this unique environment. This thesis has not just examined the HINS-light EDS. It has taken a holistic view through considering every step of the route by which one nosocomial strain of bacteria is passed from burns patient to burns patient: the cycle of cross-contamination. 
Every step in this cycle has been examined in order to determine when the HINS-light EDS could have its maximum effect. This has been coupled with extensive clinical studies of the HINS-light EDS in a variety of inpatient and outpatient scenarios in the burns unit, to determine the optimal utilisation of this technology and achieve maximum impact on bacterial populations in the environment.
Doppler filtering and detection strategies for multifunction radar. Davidson, Glen. January 2001.
This thesis concerns the analysis and processing of sea clutter from a Multiband Pulsed Radar - a land-based research system operated by the British Defence Evaluation and Research Agency. This radar serves as a model for a class of Multi Function Radars (MFR) that offer extensive computer-controlled adaptive operation. A fast Sequential Edge Detector (SED) is formulated which, accounting for locally exponential speckle, allows the spatial inhomogeneity within a scene to be segmented. This simultaneously identifies high-intensity areas and the noise-dominated shadowed regions of the scene using an adaptively sized analysis window. The high-resolution data is thus shown to contain discrete scatterers which exist in addition to the compound modulation from the wave surface. The discrete component means the measured statistics cannot be considered homogeneous or stationary. This is crucial for high-resolution MFR, as a priori information can no longer be relied upon when viewing a scene for the first time in order to make a detection decision. Considering the returns to be discrete in nature leads to a potential Doppler detection scheme operable at low velocities within the clutter spectrum. A physically motivated test statistic, termed persistence, is demonstrated based upon the lifetime of scattering events determined via the Continuous Wavelet Transform. When operated in coastal regions at low resolution, strong returns from the land-sea interface (edges) are expected which will seriously degrade the performance of radar detection models tuned to homogeneous scenes. Explicit operational bounds are determined for the strength of these edges which show that simultaneous operation of an edge detector is required when assessing compound statistics such as the K-distribution using typical texture estimators. Additionally, a method for accurately determining the N-sum PDF of K-distributed statistics within noise is constructed using a numerical inverse Laplace transform.
The SED is also applied to Synthetic Aperture Sonar data to detect the large shadows cast by targets rather than their point intensity.
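The compound K-distributed clutter model underpinning this abstract (locally exponential speckle whose mean power follows a gamma-distributed texture) can be simulated in a few lines. A minimal sketch of that compound structure only; the shape parameter, sample count and seed are arbitrary illustrative choices, not values from the thesis:

```python
# Compound simulation of K-distributed sea-clutter intensity: gamma
# texture (unit mean, shape nu) modulating exponential speckle. This
# sketches the compound model referenced in the abstract, not the
# thesis's SED or persistence statistic.
import random

def k_clutter_intensity(n, nu=0.5, mean_power=1.0, seed=1):
    """Draw n intensity samples from the compound (K-distributed) model."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Gamma texture with mean `mean_power`: shape nu, scale mean_power/nu
        tau = rng.gammavariate(nu, mean_power / nu)
        # Exponential speckle conditioned on the local texture level tau
        samples.append(rng.expovariate(1.0 / tau))
    return samples

x = k_clutter_intensity(200_000)
print(sum(x) / len(x))  # sample mean, close to mean_power = 1.0
```

A small shape parameter nu produces the long-tailed, spiky intensity record that makes homogeneous-clutter detection models fail, which is the behaviour the thesis's edge detector and texture estimators must contend with.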
Probabilistic grid scheduling: based on job statistics and monitoring information. Lazarevic, Aleksandar. January 2005.
This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, the current state of the Grid and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for enabling a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling. Initial work has established the motivation and requirements for a more efficient Grid scheduler, able to adaptively handle the dynamic nature of the Grid resources and submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications. A simulation tool, GridLoader, has been developed to enable modelling of application loads similar to a number of typical Grid applications. GridLoader is able to simulate CPU utilisation, memory allocation and network transfers according to limits set through command-line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment suitable for testing the scheduler's adaptability and its prediction algorithm. To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite is able to collect resource usage information of individual Grid applications, integrate it into a standard XML-based information flow, provide visualisation through a Web portal, and export data into a format suitable for off-line analysis. The thesis also presents an initial investigation of the utilisation of the University College London Central Computing Cluster facility running Sun Grid Engine middleware.
The feasibility of basic prediction concepts based on historical information and process meta-data has been successfully established, and possible scheduling improvements using such predictions identified. The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 gives a survey of the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work - the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis, while Section 8 concludes the thesis, followed by appendices and references.
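The deadline-scheduling idea above - predict a job's runtime from its historical statistics and decide whether a deadline can be met - can be sketched very simply. This is a hypothetical toy, not the thesis's predictor; the job history and safety factor are invented for illustration:

```python
# Toy deadline-admission decision from historical job runtimes: accept
# a job if the historical mean plus k standard deviations fits within
# the deadline. Purely illustrative of the probabilistic idea.
from statistics import mean, stdev

def can_meet_deadline(past_runtimes, deadline, k=2.0):
    """Accept a job if mean + k * std of past runtimes fits the deadline."""
    mu = mean(past_runtimes)
    sigma = stdev(past_runtimes) if len(past_runtimes) > 1 else 0.0
    return mu + k * sigma <= deadline

history = [102.0, 98.0, 110.0, 95.0, 105.0]   # seconds, hypothetical
print(can_meet_deadline(history, deadline=150.0))  # True
print(can_meet_deadline(history, deadline=100.0))  # False
```

A real Grid scheduler would condition such estimates on the resource state and workload mix, which is precisely the monitoring data the thesis's Ganglia-based framework collects.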
Topology control and data handling in wireless sensor networks. Shum, L. L. January 2009.
Our work in this thesis has provided two distinctive contributions to WSNs, in the areas of data handling and topology control. In the area of data handling, we have demonstrated a solution that improves power efficiency whilst preserving the important data features, by means of data compression and an adaptive sampling strategy, applicable to the specific oceanographic monitoring application required by the SECOAS project. Our work on oceanographic data analysis is important for the understanding of the data we are dealing with, such that suitable strategies can be deployed and system performance can be analysed. The Basic Adaptive Sampling Scheduler (BASS) algorithm uses the statistics of the data to adjust the sampling behaviour in a sensor node according to the environment, in order to conserve energy and minimise detection delay. The motivations of topology control (TC) are to maintain the connectivity of the network, to reduce node degree so as to ease congestion in a collision-based medium access scheme, and to reduce power consumption in the sensor nodes. We have developed an algorithm, Subgraph Topology Control (STC), that is distributed and does not require additional equipment to be implemented on the SECOAS nodes. STC uses a metric called the subgraph number, which measures the 2-hop connectivity in the neighbourhood of a node. It is found that STC consistently forms topologies that have lower node degrees and higher probabilities of connectivity, as compared to k-Neighbours, an alternative algorithm that does not rely on special hardware on the sensor nodes. Moreover, STC also gives better results in terms of the minimum degree in the network, which implies that the network structure is more robust to a single point of failure. As STC is an iterative algorithm, it is very scalable and adaptive, and is well suited to the SECOAS applications.
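The abstract describes the subgraph number only as a measure of 2-hop connectivity around a node; the exact definition is in the thesis. As one plausible illustrative reading, the sketch below counts edges in the subgraph induced by a node's 2-hop neighbourhood; the topology is invented:

```python
# Illustrative 2-hop neighbourhood metric in the spirit of STC's
# "subgraph number" (the thesis's exact definition is not reproduced
# here): count the edges among the nodes reachable within 2 hops.

def two_hop_subgraph_edges(adj, node):
    """Count edges among the nodes reachable from `node` within 2 hops."""
    one_hop = set(adj[node])
    two_hop = one_hop | {w for v in one_hop for w in adj[v]}
    two_hop.discard(node)
    return sum(1 for u in two_hop for v in adj[u]
               if v in two_hop and u < v)

# A hypothetical 5-node topology as an adjacency list:
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
print(two_hop_subgraph_edges(adj, 0))  # 2 edges: (1,2) and (1,3)
```

A metric of this kind is attractive for SECOAS-class nodes because each node needs only its neighbours' neighbour lists, so it can be computed in a fully distributed manner without extra hardware.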
Electrical properties of diamond nanostructures. Bevilacqua, M. January 2010.
Nanocrystalline diamond (NCD) films can potentially be used in a large variety of applications such as electrochemical electrodes, tribology, cold cathodes, and corrosion resistance. A thorough knowledge of the electrical properties of NCD films is therefore critical to understand and predict their performance in various applications. In the present work the electrical properties of NCD films were analysed using Impedance Spectroscopy and Hall Effect measurements. Impedance Spectroscopy makes it possible to identify and single out the conduction paths within the films tested. Such conduction paths can be through grain interiors and/or grain boundaries. Hall measurements, carried out on Boron-doped NCD, permit determination of the mobility of the films. Specific treatments were devised to enhance the properties of the NCD films studied. Detonation nanodiamond (DND) is becoming an increasingly interesting material. It is already used as an abrasive material or a component of coatings, but its potential applications can extend far beyond these. It is therefore essential to understand the structure and electrical properties of DND in order to exploit the full potential of this material. In the present work, the electrical properties of DND were studied using Impedance Spectroscopy. The results obtained suggest that DND could be used to manufacture devices able to work as ammonia detectors. Another major area of study in this work was ultra-violet diamond photodetectors. Using high-quality CVD single-crystal diamond, UV photodetection devices were built using standard lithographic techniques. Following the application of heat treatments, the photoconductive properties of these devices were greatly enhanced. The devices represent state-of-the-art UV diamond photodetectors.
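Separating grain-interior from grain-boundary conduction by impedance spectroscopy is commonly modelled with two parallel R-C elements in series, each producing its own semicircle in the Nyquist plot. A textbook sketch of that equivalent circuit; the component values are illustrative, not measured NCD values:

```python
# Equivalent-circuit model for impedance spectroscopy of a grainy film:
# (R_g || C_g) for grain interiors in series with (R_gb || C_gb) for
# grain boundaries. Standard textbook model, with made-up values.
import math

def brick_layer_impedance(freq_hz, r_grain=1e3, c_grain=1e-11,
                          r_gb=1e5, c_gb=1e-9):
    """Complex impedance of (R_g || C_g) + (R_gb || C_gb)."""
    w = 2 * math.pi * freq_hz
    z_g = r_grain / (1 + 1j * w * r_grain * c_grain)
    z_gb = r_gb / (1 + 1j * w * r_gb * c_gb)
    return z_g + z_gb

# At low frequency both capacitors block, so |Z| -> R_g + R_gb; sweeping
# frequency resolves the two arcs and hence the two conduction paths:
print(abs(brick_layer_impedance(0.01)))
```

The two R-C elements relax at different characteristic frequencies (1 / R C), which is what lets the measurement single out grain-interior versus grain-boundary conduction as described above.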
Wireless D&F relay channels: time allocation strategies for cooperation and optimum operation. Elsheikh, E. M. A. January 2010.
Transmission over the wireless medium is a challenge compared to its wired counterpart. Scarcity of spectrum, rapid degradation of signal power over distance, interference from neighbouring nodes and the random behaviour of the channel are some of the difficulties with which a wireless system designer has to deal. Moreover, emerging wireless networks assume mobile users with limited or no infrastructure. Since its early application, relaying has offered a practical solution to some of these challenges. Recently, interest in the relay channel has been revived by the work on user-cooperative communications. The latest studies aim to re-employ the channel to serve modern wireless networks. In this work, the decode-and-forward (D&F) relay channel with a half-duplex constraint on the relay is studied. The focus is on producing analytical results for the half-duplex D&F relay channel, with particular attention given to time allocation. First, an expression for the mutual information of the channel with arbitrary time allocation is developed. Introduction of the concept of the conversion point explains some of the channel behaviour and helps in classifying the channel into suppressed and unsuppressed types. In the case of Rayleigh fading, the cumulative distribution function (cdf) and probability density function (pdf) of the mutual information are evaluated. Consequently, expressions for the average mutual information and outage probability are obtained. Optimal operation of the channel is investigated. Optimal time allocation for maximum mutual information and optimal time allocation for minimum total transmission time are worked out for the case of channel state information at the transmitter (CSIT). The results reveal an important duality between the optimisation problems. The results obtained are extended from a two-hop channel to any number of hops. Only sequential transmission is considered. A cooperative scheme is also developed based on the three-node relay channel.
A two-user network is used as a prototype for a multi-user cooperative system. Based on the model assumed, an algorithm for partner selection is developed. Simulation results showed advantages of cooperation for individual users as well as the overall performance of the network.
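The time-allocation optimisation described above can be illustrated with a common textbook formulation of the half-duplex D&F rate: the relay listens for a fraction t of the block and forwards for the remainder, and the achievable rate is the minimum of what the relay can decode and what the destination accumulates. The thesis's exact mutual-information expressions may differ; the SNR values here are illustrative:

```python
# Half-duplex decode-and-forward rate vs. time allocation (a common
# textbook formulation, not necessarily the thesis's exact model):
# listen fraction t, forward fraction 1 - t. SNRs are made up.
import math

def df_rate(t, snr_sr=15.0, snr_sd=1.0, snr_rd=8.0):
    """min of relay-decodable rate and destination-collected rate."""
    c = lambda snr: math.log2(1 + snr)
    return min(t * c(snr_sr), t * c(snr_sd) + (1 - t) * c(snr_rd))

# Grid search for the optimal listening-time fraction: the maximum sits
# where the two arguments of the min cross (one rises in t, one falls).
best_t = max((i / 1000 for i in range(1, 1000)), key=df_rate)
print(round(best_t, 3), round(df_rate(best_t), 3))
```

The crossing-point behaviour shown here is the kind of structure that makes a closed-form optimal time allocation tractable under CSIT, as the abstract indicates.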
Improving forwarding mechanisms for mobile personal area networks. Ali, R. January 2011.
This thesis presents novel methods for improving forwarding mechanisms for personal area networks. Personal area networks are formed by interconnecting personal devices such as personal digital assistants, portable multimedia devices, digital cameras and laptop computers, in an ad hoc fashion. These devices are typically characterised by low-complexity hardware and low memory, and are usually battery-powered. Protocols and mechanisms developed for general ad hoc networking cannot be directly applied to personal area networks, as they are not optimised to suit their specific constraints. The work presented herein proposes solutions for improving error control and routing over personal area networks, both of which are vital to the proper functioning of the network. The proposed Packet Error Correction (PEC) technique resends only a subset of the transmitted packets, thereby reducing the overhead while ensuring improved error rates. PEC adapts the number of re-transmissible packets to the conditions of the channel so that unnecessary retransmissions are avoided. It is shown by means of computer simulation that PEC behaves better, in terms of error reduction and overhead, than traditional error control mechanisms, which means that it is well suited to low-power personal devices. The proposed C2HR routing protocol, on the other hand, is designed such that the network lifetime is maximised. This is achieved by forwarding packets through the most energy-efficient paths. C2HR is a hybrid routing protocol in the sense that it employs table-driven (proactive) as well as on-demand (reactive) components. Proactive routes are the primary routes, i.e., packets are forwarded through those paths when the network is stable; however, in case of failures, the protocol searches for alternative routes on demand, through which data is routed temporarily. The advantage of C2HR is that data can still be forwarded even while routing is re-converging, thereby increasing the throughput.
Simulation results show that the proposed routing method is more energy efficient than traditional least-hops routing, and results in higher data throughput. C2HR relies on a network leader for collecting and distributing topology information, which in turn requires an estimate of the underlying topology. Thus, this thesis also proposes a new cooperative leader election algorithm and techniques for estimating network characteristics in mobile environments. The proposed solutions are simulated under various conditions and demonstrate favourable behaviour.
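The contrast between least-hops and energy-efficient routing comes down to the link metric fed into a shortest-path computation. A generic Dijkstra sketch over per-link energy costs, in the spirit of C2HR's proactive component but not the protocol itself; the topology and costs are invented:

```python
# Energy-aware path selection: standard Dijkstra over per-link energy
# costs. A generic shortest-path sketch (not the C2HR protocol); the
# network and costs below are hypothetical.
import heapq

def min_energy_path(links, src, dst):
    """links: {node: [(neighbour, energy_cost), ...]}; returns (path, cost)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in links.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

links = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0), ("D", 6.0)],
         "C": [("D", 1.0)]}
print(min_energy_path(links, "A", "D"))  # 3-hop A-B-C-D beats 2-hop A-B-D
```

With energy costs as weights, a longer path of cheap links wins over a shorter path of expensive ones, which is exactly why least-hops routing can drain battery-powered nodes faster.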
Performance factors for airborne short-dwell squinted radar sensors. Beard, G. S. January 2011.
Millimetre-wave radar in a missile seeker for the engagement of ground targets allows all-weather, day and night, surface imaging and has the ability to detect, classify and geolocate objects at long ranges. The use of a seeker allows intelligent target selection and removes inaccuracies in the target position. The selection of the correct target against a cluttered background in radar imagery is a challenging problem, which is further constrained by the seeker's hardware and flight-path. This thesis examines how to make better use of the components of radar imagery that support target selection. Image formation for a squinted radar seeker is described, followed by an approach to automatic target recognition. Size and shape information is considered using a model-matching approach that does not rely on extensive databases of templates, but on a limited set of shape-only templates to reject clutter objects. The effects of radar sensitivity on size measurements are then explored to understand seeker operation in poor weather. Size measures cannot easily be used for moving targets, where the target signature is distorted and displaced. The ability to detect, segment and measure vehicle dimensions and velocity from the shadows of moving targets is tested using real and simulated data. The choice of polarisation can affect the quality of measurements and the ability to reject clutter. Data from three different radars are examined to help to understand the performance using linear and circular polarisations. For sensors operating at shorter ranges, the application of elevation monopulse to include target height as a discriminant is tested, showing good potential on simulated data. The combination of these studies offers an insight into the performance factors that influence the design and processing of a radar seeker. The use of shadow information in short-dwell radar seeker imagery is an area offering particular promise.
Spectrum optimisation in wireless communication systems: technology evaluation, system design and practical implementation. Grammenos, R. C. January 2013.
Two key technology enablers for next generation networks are examined in this thesis, namely Cognitive Radio (CR) and Spectrally Efficient Frequency Division Multiplexing (SEFDM). The first part proposes the use of traffic prediction in CR systems to improve the Quality of Service (QoS) for CR users. A framework is presented which allows CR users to capture a frequency slot in an idle licensed channel occupied by primary users. This is achieved by using CR to sense and select target spectrum bands, combined with traffic prediction to determine the optimum channel-sensing order. The latter part of this thesis considers the design, practical implementation and performance evaluation of SEFDM. The key challenge that arises in SEFDM is the self-created interference, which complicates the design of receiver architectures. Previous work has focused on the development of sophisticated detection algorithms; however, these suffer from impractical computational complexity. Consequently, the aim of this work is two-fold: first, to reduce the complexity of existing algorithms to make them better suited for application in the real world; second, to develop hardware prototypes to assess the feasibility of employing SEFDM in practical systems. The impact of oversampling and fixed-point effects on the performance of SEFDM is initially determined, followed by the design and implementation of linear detection techniques using Field Programmable Gate Arrays (FPGAs). The performance of these FPGA-based linear receivers is evaluated in terms of throughput, resource utilisation and Bit Error Rate (BER). Finally, variants of the Sphere Decoding (SD) algorithm are investigated to ameliorate the error performance of SEFDM systems with a targeted reduction in complexity. The Fixed SD (FSD) algorithm is implemented on a Digital Signal Processor (DSP) to measure its computational complexity.
Modified sorting and decomposition strategies are then applied to this FSD algorithm, offering trade-offs between execution speed and BER.
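The self-created interference at the heart of SEFDM can be shown in miniature: compressing the subcarrier spacing by a bandwidth-compression factor alpha < 1 makes the carrier correlation matrix non-diagonal, unlike orthogonal OFDM (alpha = 1). A minimal sketch; the small carrier count and the value of alpha are illustrative:

```python
# SEFDM inter-carrier correlation: for discrete carriers
# exp(j 2 pi alpha n k / N) / sqrt(N), distinct carriers are orthogonal
# only when alpha = 1 (the OFDM case). Small N and alpha are arbitrary.
import cmath

def carrier_correlation(N, alpha, m, k):
    """Inner product <c_m, c_k> of carriers m and k over N samples."""
    return sum(cmath.exp(2j * cmath.pi * alpha * n * (m - k) / N)
               for n in range(N)) / N

# Orthogonal at alpha = 1, leaking at alpha = 0.8:
print(abs(carrier_correlation(8, 1.0, 0, 1)))   # ~0: orthogonal (OFDM)
print(abs(carrier_correlation(8, 0.8, 0, 1)))   # > 0: self-interference
```

These non-zero off-diagonal correlations are precisely what forces SEFDM receivers towards joint detection schemes such as sphere decoding, and why complexity reduction is the central theme of this work.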