471
Analysis of voiceprint and other biometrics for criminological and security applications
Hosseyndoust Foomany, Farbod (January 2010)
This thesis examines the role and limitations of voice biometrics in the contexts of security and crime reduction. The main thrust of the thesis is that, despite the technical and non-technical hurdles that this research has identified and sought to overcome, voice can be an effective and sustainable biometric if used in the manner proposed here. It is contended that focused and continuous evaluation of the strength of systems within a solid framework is essential to the development and application of voice biometrics, and that special attention needs to be paid to human dimensions in system design and prior to deployment. Through an interdisciplinary approach to the theme reflected in the title, several scenarios are presented of the use of voice in security/crime reduction, crime investigation, forensics and surveillance contexts, together with the issues surrounding their development and implementation. With a greater emphasis on security-oriented voice verification (due to the diversity of the usage scenarios and the prospect of use), a new framework is presented for analysing the reliability and security of voice verification. This research not only calls for a standard evaluation scheme and analytical framework but also takes active steps to evaluate the prototype system within that framework under various conditions. Spoof attacks, noise, coding, distance and channel effects are among the factors studied. Moreover, an additional under-researched area, the detection of counterfeit signals, is also explored. While the numerous technical and design contributions made in this project are summarised in Chapter 2, the research mainly aims to provide solid answers to high-level strategic questions. The thesis culminates in a synthesis chapter in which realistic expectations, design requirements and technical limitations of the use of voice for criminological and security applications are outlined and areas for further research are defined.
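As a rough illustration of how the strength of a voice verification system is commonly evaluated (a generic sketch, not drawn from the thesis), the false acceptance rate (FAR), false rejection rate (FRR) and equal error rate (EER) can be estimated from genuine and impostor match scores; the score lists and threshold sweep below are assumptions for demonstration only.

```python
# Illustrative sketch (not from the thesis): estimating FAR, FRR and the
# equal error rate (EER) of a voice verification system from match scores.
# The score lists and threshold sweep are assumed for demonstration.

def far_frr(genuine_scores, impostor_scores, threshold):
    """False acceptance / false rejection rates at a given decision threshold."""
    fa = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fa, fr

def equal_error_rate(genuine_scores, impostor_scores, steps=1000):
    """Sweep thresholds and return the point where FAR and FRR are closest."""
    lo = min(genuine_scores + impostor_scores)
    hi = max(genuine_scores + impostor_scores)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        fa, fr = far_frr(genuine_scores, impostor_scores, t)
        if best is None or abs(fa - fr) < abs(best[1] - best[2]):
            best = (t, fa, fr)
    return best  # (threshold, FAR, FRR) at the approximate EER point

# Example with made-up verification scores (higher = more likely genuine).
genuine = [0.91, 0.85, 0.78, 0.88, 0.95, 0.70]
impostor = [0.40, 0.55, 0.62, 0.30, 0.48, 0.66]
print(equal_error_rate(genuine, impostor))
```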
472
Dynamic web services composition
Mustafa, Faisal (January 2014)
Emerging web services technology has introduced the concept of autonomic interoperability and portability between services. The number of online services has increased dramatically, with many duplicating similar functionality and results. Composing online services to meet user needs is a growing area of research; it entails designing systems which can discover participating services and integrate them according to the end user's requirements. This thesis proposes a Dynamic Web Services Composition (DWSC) process based upon consideration of previously successful attempts in this area, in particular those utilizing AI-planning-based solutions. It proposes a unique approach to service selection and dynamic web service composition by exploring the usability of the semantic web and its limitations. It also proposes a design architecture called the Optimal Synthesis Plan Generation (OSPG) framework, which supports the composition process through the evaluation of all available solutions (including all participating single and composite services). OSPG is designed to take account of user preferences, which supports the optimality and robustness of the output plan. The implementation of OSPG is configured and tested via division of the search criteria into different modes, thereby locating the best plan for the user. The service composition and discovery-based model is evaluated against a range of criteria, such as scope, correctness, scalability and versatility.
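As a minimal sketch of the AI-planning flavour of composition described above (not the OSPG framework itself), services can be chained by matching the outputs they produce to the inputs other services require; the service names and parameters below are purely illustrative assumptions.

```python
# Illustrative sketch (not the OSPG framework): a minimal forward-chaining
# planner that composes services by matching their input/output parameters.
# Service names and parameters are assumed purely for demonstration.

from typing import Dict, List, Set, Tuple

# Each service is described by the inputs it requires and the outputs it produces.
services: Dict[str, Tuple[Set[str], Set[str]]] = {
    "geocode":   ({"address"}, {"coordinates"}),
    "weather":   ({"coordinates"}, {"forecast"}),
    "translate": ({"forecast", "language"}, {"localised_forecast"}),
}

def compose(available: Set[str], goal: str) -> List[str]:
    """Greedily chain services until the goal parameter becomes available."""
    plan: List[str] = []
    known = set(available)
    progress = True
    while goal not in known and progress:
        progress = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
    return plan if goal in known else []

# Compose a plan that turns an address and a language into a localised forecast.
print(compose({"address", "language"}, "localised_forecast"))
# -> ['geocode', 'weather', 'translate']
```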
473
Microelectronic implementation of dicode PPM system employing RS codes
Al-Nedawe, Basman M. (January 2014)
Optical fibre systems have played a key role in making possible the extraordinary growth in world-wide communications that has occurred in the last 25 years, and are vital in enabling the proliferating use of the Internet. Optical fibre's high bandwidth, low attenuation, low cost and immunity from the many disturbances that can afflict electrical wires and wireless communication links make it ideal for gigabit transmission and a major building block of the telecommunication infrastructure. A number of different techniques are used for the transmission of digital information between the transmitter and receiver in optical fibre systems. One such coding scheme is Pulse Position Modulation (PPM), in which the location of a single pulse within 2^M time slots conveys the digital information of M bits. Although studies point to the advantages of PPM, these come at the cost of a large bandwidth requirement and a complicated implementation. Therefore, variant PPM schemes have been proposed, such as Multiple Pulse Position Modulation (MPPM), Differential Pulse Position Modulation (DPPM), Pulse Interval Modulation (PIM), Digital Pulse Interval Modulation (DPIM), Dual Header Pulse Interval Modulation (DH-PIM) and Dicode Pulse Position Modulation (DiPPM). The DiPPM scheme has been considered a solution to the bandwidth consumption from which other PPM formats suffer, because its line rate is only twice the original data rate. DiPPM can be efficiently implemented as it employs two slots to transmit one bit of pulse code modulation (PCM): a PCM transition from logic zero to logic one produces a pulse in slot RESET (R), a transition from one to zero produces a pulse in slot SET (S), and no pulse is transmitted when the PCM data is unvarying. Like other PPM schemes, DiPPM suffers from three types of pulse detection error: wrong slot, false alarm and erasure. The aim of this work was to build an error correction system based on Reed Solomon (RS) codes that would overcome or reduce the error sources in the DiPPM system. An original mathematical program was developed in Mathcad to find the optimum RS parameters that improve the DiPPM system's error performance, photon requirement and transmission efficiency. The results showed that the DiPPM system employing the RS code offered an improvement over uncoded DiPPM of 5.12 dB when the RS code operates at the optimum code rate of approximately ¾ with a codeword length of 25 symbols. Moreover, the error performance of uncoded DiPPM is compared with that of DiPPM employing a maximum likelihood sequence detector (MLSD) and with the RS-coded system in terms of the number of photons per pulse, transmission efficiency and bandwidth expansion. DiPPM with the RS code offers superior performance compared with uncoded DiPPM and DiPPM using MLSD, requiring only 4.5×10^3 photons per pulse when operating at a bandwidth equal to or above 0.9 times the original data rate. Further investigation of the DiPPM system employing the RS code was carried out: a Matlab program and VHSIC Hardware Description Language (VHDL) code were developed to simulate the designed communication system, and the simulation results agreed with the previous DiPPM theory. For the first time, this thesis presents a practical implementation of the DiPPM system employing the RS code on a Field Programmable Gate Array (FPGA).
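A minimal sketch of the DiPPM mapping described above (illustrative only, not the thesis's FPGA implementation), following the abstract's convention of a RESET-slot pulse for a 0-to-1 PCM transition and a SET-slot pulse for a 1-to-0 transition; the initial reference bit is an assumption.

```python
# Illustrative sketch (not the thesis implementation): a DiPPM encoder using the
# mapping described above -- two slots (R, S) per PCM bit, a pulse in R on a
# 0->1 transition, a pulse in S on a 1->0 transition, and no pulse otherwise.
# The initial reference bit is an assumption for demonstration.

def dippm_encode(pcm_bits, previous_bit=0):
    """Map a PCM bit stream onto DiPPM slot pairs (R, S)."""
    slots = []
    prev = previous_bit
    for bit in pcm_bits:
        if prev == 0 and bit == 1:
            slots.append((1, 0))   # pulse in slot R (RESET)
        elif prev == 1 and bit == 0:
            slots.append((0, 1))   # pulse in slot S (SET)
        else:
            slots.append((0, 0))   # no transition, no pulse
        prev = bit
    return slots

# Example: encode a short PCM sequence.
print(dippm_encode([0, 1, 1, 0, 0, 1]))
# -> [(0, 0), (1, 0), (0, 0), (0, 1), (0, 0), (1, 0)]
```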
474
Development of advanced creep damage constitutive equations for low Cr alloy under long-term service
Xu, Qihua (January 2016)
Low Cr alloys are mostly utilized in structural components such as steam pipes, turbine generators and reactor pumps operating at high temperatures from 400℃ to 700℃ in nuclear power plants. For safe operation, it is necessary at the design stage to predict and understand the creep damage behaviour of low Cr alloys under long-term service conditions and low stress levels. Laboratory creep tests can be used to investigate creep damage behaviour; however, they are usually expensive and time-consuming. Constitutive modelling is therefore considered here for both time and economic efficiency. Existing constitutive equations for describing creep are mostly based on experimental data for materials under high stresses; for low stress levels, the current state is extrapolated from those constitutive equations by simply applying a power law or a hyperbolic sine (sinh) law. However, experimental observation has shown that this approach is not satisfactory. The aim of the current research is to use continuum damage mechanics (CDM) to improve the constitutive equations for low Cr alloys under long-term service. This project provides three main contributions. The first is a more accurate depiction of the relationship between the minimum creep rate and the stress level; the predicted creep rates show good agreement with experimentally observed creep data for both 2.25Cr-1Mo steel and 0.5Cr-0.5Mo-0.25V steel creep specimens. Secondly, it gives a more comprehensive description of the relationship between creep damage and creep cavitation; using the CDM approach, reasonable agreement has been achieved between predicted creep strain and experimental data for 0.5Cr-0.5Mo-0.25V base material under the critical stress of 40 MPa at 640℃. Thirdly, it proposes a more accurate creep rupture criterion for the creep damage analysis of low Cr alloys under different stress levels; based on an investigation of creep cavitation in 2.25Cr-1Mo steel, the area fraction of cavitation at rupture time differs markedly under different stress levels. This thesis contributes to computational creep damage mechanics in general and in particular to the design of a constitutive model for creep damage analysis of low Cr alloys. The proposed constitutive equations are only valid at low and intermediate stress levels; further work needs to be undertaken when more experimental data become available.
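For reference, the two stress functions mentioned above are conventionally written as follows (standard textbook forms, not the improved equations proposed in the thesis), where the minimum creep rate depends on stress through material constants A, n and B fitted to creep data:

```latex
% Standard minimum-creep-rate laws referred to above
% (textbook forms, not the improved equations proposed in the thesis).
\begin{align}
  \dot{\varepsilon}_{\min} &= A\,\sigma^{n}        && \text{(Norton power law)}\\
  \dot{\varepsilon}_{\min} &= A\,\sinh(B\,\sigma)  && \text{(hyperbolic sine law)}
\end{align}
```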
475
Investigations into a multiplexed fibre interferometer for on-line, nanoscale, surface metrology
Martin, Haydn (January 2010)
Current trends in technology are leading to a need for ever smaller and more complex surface features. The techniques for manufacturing these surfaces are varied but share one limitation: the lack of usable, on-line metrology instrumentation. Current metrology methods require the removal of the workpiece for characterisation, which leads to machining down-time and more intensive labour, and generally presents a bottleneck for throughput. In order to establish a new method for on-line metrology at the nanoscale, investigations are made into the use of optical fibre interferometry to realise a compact probe that is robust to environmental disturbance. Wavelength tuning is combined with a dispersive element to provide a movable optical stylus that sweeps the surface. The phase variation caused by the surface topography is then analysed using phase-shifting interferometry. A second interferometer is wavelength-multiplexed into the optical circuit in order to track the inherent instability of the optical fibre; this instability is then countered using closed-loop control to servo the path lengths mechanically, which additionally counters external vibration at the measurand. The overall stability is found, however, to be limited by polarisation state evolution. A second method is then investigated, in which a rapid phase-shifting technique is employed in conjunction with an electro-optic phase modulator to overcome the polarisation state evolution. Closed-loop servo control is realised with no mechanical movement, and a step-height artefact is measured. The measurement result shows good correlation with a measurement taken with a commercial white light interferometer.
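As background, a standard four-step phase-shifting algorithm (a textbook result, not specific to this thesis) recovers the phase from four intensity frames captured with 90-degree phase increments:

```latex
% Standard four-step phase-shifting algorithm (textbook form, not thesis-specific).
% Four intensity frames I_1..I_4 are captured with 90-degree phase shifts.
\[
  I_k = I_0\left[1 + \gamma\cos\!\left(\varphi + (k-1)\tfrac{\pi}{2}\right)\right],
  \qquad
  \varphi = \arctan\!\left(\frac{I_4 - I_2}{I_1 - I_3}\right)
\]
```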
476
Motor fault diagnosis using higher order statistical analysis of motor power supply parameters
Alwodai, Ahmed (January 2015)
Motor current signature analysis (MCSA) has been an effective method of monitoring electrical machines for many years, predominantly because of its low instrumentation cost, remote implementation and comprehensive information content. However, it suffers from low accuracy and efficiency in resolving the weak signals produced by incipient faults, such as the early stages of induction motor faults. In this thesis MCSA has been improved to accurately detect electrical and mechanical faults in the induction motor, namely broken rotor bars, stator faults and motor bearing faults. Motor current signals corresponding to healthy (baseline) and faulty conditions of an induction motor at different loads (zero, 25%, 50% and 75% of full load) were collected and organised, and the baseline current data were examined using conventional frequency-domain methods and used as the reference for comparison with the new modulation signal bispectrum. Based on the fundamental modulation effect of the weak fault signatures, a new method based on modulation signal bispectrum (MSB) analysis is introduced to characterise the modulation and hence quantify the signatures accurately; this method is named MSB-SE. For broken rotor bars (BRB), the results show that the MSB-SE suggested in this research significantly outperforms the conventional bispectrum (CB) in all cases, owing to its high performance in nonlinear modulation detection and random noise suppression, demonstrating that MSB-SE is an outstanding technique whereas CB is inefficient for motor current signal analysis [1]. Moreover, the new estimators produce more accurate results at zero, 25%, 50% and 75% of full load under the broken rotor bar condition compared with power spectrum analysis; in particular, the half-BRB case can be separated from the baseline at a load as low as 25%, where the power spectrum (PS) would not produce a correct separation. In the case of stator faults, MSB-SE is investigated for detecting different severities of both open-circuit and short-circuit faults. It is shown that MSB-SE can accurately estimate modulation degrees and suppress random and non-modulation components. Test results show that MSB-SE better differentiates the spectrum amplitudes due to stator faults and hence produces better diagnosis performance than the power spectrum (PS). For motor bearing faults, tests were performed under three bearing conditions: baseline, outer-race fault and inner-race fault. Because the fault-related signals produce only small modulations of the supply component and noise levels are high, MSB-SE is used to detect and diagnose the different motor bearing defects. The results show that bearing faults induce detectable amplitude increases at their characteristic frequencies: MSB-SE peaks show a clear difference at these frequencies, whereas the conventional power spectrum provides evidence of change at only some of them. This shows that MSB has a better and more reliable performance in detecting the small changes caused by a faulty bearing for fault detection and diagnosis. In addition, the study shows that current signals from motors with a variable frequency drive controller contain too much noise for the small bearing fault components to be discriminated reliably. This research also applies a mathematical model to simulate current signals under healthy and broken-bar conditions in order to further understand the characteristics of the fault signature and to verify the methodologies used and the accuracy achieved in the detection and diagnosis results. The results show that the frequency spectra of the current signals output by the model take the expected form, with peaks at the sideband frequencies and associated harmonics.
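As a generic illustration of the conventional bispectrum against which MSB-SE is compared (this sketch is not the thesis's MSB-SE estimator), a segment-averaged estimate can be formed as follows; the test signal and parameters are assumptions for demonstration only.

```python
# Illustrative sketch: a segment-averaged estimate of the conventional bispectrum
# B(f1, f2) = E[ X(f1) X(f2) X*(f1 + f2) ], the baseline against which MSB-SE is
# compared in the abstract. This is NOT the thesis's MSB-SE estimator; the test
# signal and parameters below are assumptions for demonstration only.
import numpy as np

def bispectrum(x, seg_len=256):
    """Average X(f1)X(f2)X*(f1+f2) over non-overlapping segments of x."""
    n_seg = len(x) // seg_len
    B = np.zeros((seg_len, seg_len), dtype=complex)
    for i in range(n_seg):
        X = np.fft.fft(x[i * seg_len:(i + 1) * seg_len])
        f = np.arange(seg_len)
        # Outer product builds X(f1)X(f2); X*(f1+f2) is picked with wrap-around.
        B += np.outer(X, X) * np.conj(X[(f[:, None] + f[None, :]) % seg_len])
    return B / max(n_seg, 1)

# Example: an amplitude-modulated tone, loosely mimicking a modulated supply component.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = (1 + 0.3 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 50 * t)
print(np.abs(bispectrum(x)).max())
```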
477
XML security in XML data integrity, authentication, and confidentiality
Liu, Baolong (January 2010)
The wide application of XML increasingly requires strong security. XML security confronts several challenges that relate closely to XML's features. XML data integrity needs to protect element location information and context-referential meaning, as well as data content integrity, in fine-grained security situations. XML data authentication must satisfy a signing process under dependent and independent multi-signature generation scenarios. When several different sections within an XML document are encrypted, the encrypted contents cannot be queried without first decrypting the encrypted portions. The technologies relating to XML security therefore demand further development. This thesis aims to improve XML security-related technologies and make them more practicable and secure. A novel revocation information validation approach for X.509 certificates is proposed based on XML digital signature technology. This approach reduces the complexity of XKMS or PKI systems because it eliminates the requirement for additional revocation checking from XKMS or the CA, and the communication burden between server and client can be alleviated. The thesis presents context-referential integrity for XML data, and an integrity solution for XML data is proposed based on a concatenated hash function. The proposed integrity model not only ensures the integrity of XML data content but also protects the structural integrity and the contextual relationships of elements within an XML document; if this model is integrated into XML signature technology, a signature cannot be copied to another document and remain valid. A new series-parallel XML multi-signature scheme is proposed: a mixed, order-specified XML multi-signature scheme supporting dependent and independent signing processes. Using the presented XML data integrity-checking pool to provide integrity checking for decomposed XML data makes signing an XPath expression practicable, rather than signing the XML data itself. A new labelling scheme for encrypted XML data is presented to improve the efficiency of maintaining the index information used to support query processing over encrypted XML data. The proposed labelling scheme makes index maintenance more efficient and makes XML data easy to update, reducing the number of affected nodes to a minimum. In order to protect the structural information of encrypted XML data, the encrypted nodes are removed from the original XML data and the structural information is hidden. A case study is carried out to demonstrate how the proposed XML security approaches and schemes can be applied to satisfy fine-grained XML security requirements in calibration certificate management.
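As a loose illustration of binding an element's integrity to its context (a generic sketch, not the thesis's concatenated-hash or labelling scheme), each element's digest can include its path and sibling position, so that content copied to a different location no longer verifies; the toy document and path format are assumptions.

```python
# Illustrative sketch (not the thesis's concatenated-hash scheme): binding each
# element's digest to its location so that content moved to a different context
# no longer verifies. The element paths and the toy document are assumptions.
import hashlib
import xml.etree.ElementTree as ET

def context_digests(xml_text):
    """Hash each element's text together with its path and sibling position."""
    root = ET.fromstring(xml_text)
    digests = {}
    def walk(elem, path):
        for i, child in enumerate(elem):
            child_path = f"{path}/{child.tag}[{i}]"
            payload = child_path + "|" + (child.text or "")
            digests[child_path] = hashlib.sha256(payload.encode()).hexdigest()
            walk(child, child_path)
    walk(root, "/" + root.tag)
    return digests

doc = "<cert><owner>lab A</owner><value>9.81</value></cert>"
for path, digest in context_digests(doc).items():
    print(path, digest[:16])
```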
478
The development of a flexible characterisation system for surface metrology
Lan, Xiangqi (January 2014)
Surface texture and its measurement are becoming increasingly important in the fields of high-precision engineering and nano-technology. Characterising the surface of an engineering product is a significant and efficient way to predict its functional performance. As the extraction and evaluation of surface features relies not only on analysis algorithms but also on measurement techniques, most surface characterisation software systems are embedded in measurement instruments. At present, even though a series of Geometrical Product Specification (GPS) standards has been released to guide the procedure of surface characterisation, no software system fully supports them. As a consequence, evaluation results from different systems are incomparable or, worse, conflict with each other. A surface characterisation system needs to be updated constantly as new algorithms and methods emerge; however, the lack of extensibility, reusability and maintainability is a serious obstacle to the innovation of existing surface characterisation systems. As the functional modules in current systems are tightly coupled, reuse of function modules and innovation of the overall system are hindered, and a great deal of redundant and duplicated work arises in either enhancing present characterisation systems or building new ones. To improve the reusability of function modules and facilitate system extension, this research aims to establish a flexible surface characterisation system with an open architecture. By employing component-based development technologies, the overall characterisation system is constructed by gluing various functional components together instead of being created from scratch. Each analysis algorithm or method is implemented as an independent functional component, separated from the system framework, and can easily be reused by other characterisation systems as an executable chunk. Any functional component can be developed or maintained independently by different organisations as long as it complies with the predefined protocols (interfaces). This thesis proposes a novel surface characterisation system which can be reconfigured to meet end users' expectations at any time, even after it has been installed and deployed. Functional components can be added, removed or replaced dynamically without affecting other parts of the system. Furthermore, the system is flexible enough that researchers and developers can concentrate on the characterisation algorithms and methods themselves and develop their own functional components, which can easily be added to the system.
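A minimal sketch of the component-based idea described above (illustrative only, not the thesis's actual framework or interfaces): analysis algorithms implement a common protocol and are registered with, and replaceable in, a host framework at run time; the class names and the example parameter are assumptions.

```python
# Illustrative sketch (not the thesis's architecture): a minimal plug-in style
# interface for surface characterisation components, showing how analysis
# algorithms could be added or swapped without touching the host framework.
# The interface name, registry and example parameter are assumptions.
from abc import ABC, abstractmethod
from typing import Dict, List

class CharacterisationComponent(ABC):
    """Predefined protocol every functional component must implement."""
    name: str

    @abstractmethod
    def process(self, profile: List[float]) -> Dict[str, float]:
        """Analyse a surface profile and return named results."""

class MeanRoughness(CharacterisationComponent):
    name = "Ra"
    def process(self, profile):
        mean = sum(profile) / len(profile)
        return {"Ra": sum(abs(z - mean) for z in profile) / len(profile)}

class ComponentRegistry:
    """Host framework: components can be registered or replaced at run time."""
    def __init__(self):
        self._components: Dict[str, CharacterisationComponent] = {}
    def register(self, component: CharacterisationComponent):
        self._components[component.name] = component
    def run_all(self, profile):
        return {n: c.process(profile) for n, c in self._components.items()}

registry = ComponentRegistry()
registry.register(MeanRoughness())
print(registry.run_all([0.1, -0.2, 0.05, 0.3, -0.15]))
```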
479
Investigation and implementation of dicode pulse position modulation over indoor visible light communication system
Buhafa, Adel Mohamed (January 2015)
Visible light communication (VLC) is an available green technology that enables users to employ white LEDs for illumination as well as for high-data-rate transmission over wireless optical links. In addition, LEDs have the advantages of low power consumption, high speed with power efficiency, and low cost. Consequently, a great deal of research addresses indoor VLC, as it offers huge bandwidth when paired with a suitable modulation technique. This thesis is concerned with the investigation and implementation of the dicode pulse position modulation (DiPPM) scheme over a VLC link using white LED sources. Novel work is carried out in applying DiPPM over a VLC channel, theoretically and experimentally, including a comparison with digital PPM (DPPM) in order to examine the system performance. Moreover, variable DiPPM (VDiPPM) is proposed in this thesis for dimming control. The indoor VLC channel characteristics have been investigated for two propagation prototypes, and two models have been proposed and developed with DiPPM and DPPM applied over the VLC channel. A computer simulation of the proposed models for both the DiPPM and DPPM systems is performed in order to analyse the receiver sensitivity with the effect of intersymbol interference (ISI); both systems operate at 100 Mbps and 1 Gbps for a BER of 10^-9, and an improvement in sensitivity is achieved by the DiPPM system compared with the DPPM VLC system. The system performance analysis was carried out in Mathcad. The predicted DiPPM receiver sensitivity outperforms that of the DPPM receiver by -5.55 dBm and -8.24 dBm at a 1 Gbps data rate, and by -5.53 dBm and -8.22 dBm at 100 Mbps, without and with guard intervals respectively. In both cases the optical receiver sensitivity is increased when the ISI is ignored. These results are based on the received optical power required by each modulation scheme. Further mathematical evaluation was carried out to calculate the optical receiver sensitivity in order to verify the comparison between the two systems. The original numerical results show that the DiPPM VLC system provides a better sensitivity than the DPPM VLC system at the selected BER of 10^-9 when referred to the same preamplifier at a wavelength of 650 nm, based on the equivalent input noise current generated by the optical front-end receiver. The results show that the predicted sensitivity of DiPPM is better than that of DPPM by about 1 dBm when both systems operate at 100 Mbps and 1 Gbps. It is also shown that the receiver sensitivity is increased when the ISI is limited. Experimentally, a complete indoor VLC system has been designed and implemented using Quartus II 11.1 software to generate the VHDL code, with an FPGA development board (Cyclone IV GX) as the main real-time transmission interface unit. The white-LED-chip-based transmitter and the optical receiver have been constructed and tested. The measurements were performed using a white LED as the optical transmitter facing a photodiode optical receiver on a desk. Owing to the LED bandwidth limitation, the achieved operating data rate, using a high-speed LED driver, is 5.5 Mbps at a BER of 10^-7. The original measurement results show that the average photodiode currents produced by the DiPPM and DPPM optical receivers are 8.50 μA and 10.22 μA respectively, which in turn indicates that the DiPPM receiver gives a better sensitivity of -17.24 dBm compared with -16.44 dBm for the DPPM receiver. These practical results confirm the simulation and theoretical results: higher performance is achieved when the DiPPM scheme is used rather than the DPPM scheme over an indoor VLC system.
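For context, the sensitivities quoted above in dBm refer to received optical power, which relates to the measured photocurrents through the photodiode responsivity; these are standard relations rather than results from the thesis, and any responsivity value used with them would be an assumption.

```latex
% Standard relations (not thesis results): optical power expressed in dBm, and the
% link between average photocurrent I and received optical power P through the
% photodiode responsivity R (any value of R used here would be an assumption).
\[
  P_{\mathrm{dBm}} = 10\log_{10}\!\left(\frac{P}{1\,\mathrm{mW}}\right),
  \qquad
  I = R\,P \;\;\Rightarrow\;\; P = \frac{I}{R}
\]
```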
480
The structure of a general type of inverse problem in metrology
Ding, Hao (January 2016)
Inverse problems are ubiquitous in science. The theory and techniques of inverse problems play important roles in metrology owing to the close relation between inverse problems and indirect measurements. However, the essential connection between the concepts of inverse problems and measurement has not previously been discussed in depth. This thesis is focused on a general type of inverse problem in metrology that arises naturally in indirect measurements, called the inverse problem of measurement (IPM). Based on the representational theory of measurement, a deterministic model of indirect measurement is developed, which shows that the IPM can be taken as the inference process of an indirect measurement and defined as the inference of the values of the measurand from observations of one or more other quantities. The desired properties of solutions to IPMs are listed and investigated in detail, and the importance of estimating empirical relations is emphasised. Based on the desired properties, structural properties of IPMs are derived using category theory and order theory; it is thereby demonstrated that the structure of IPMs can be characterised by a notion from order theory called a 'Galois connection'. The deterministic model of indirect measurement is generalised to a probabilistic model by considering the effects of measurement uncertainty and intrinsic uncertainty. The propagation of uncertainty from the observed data to the values of the measurands is investigated using a covariance matrix method and a Bayesian method. Methods of estimating empirical relations with assigned probabilities, using the solutions of the IPM, are discussed in two approaches: the coverage interval approach and the random variable approach. For estimating empirical relations and determining the conformity of measurement results in indirect measurements, a strategy of estimating empirical relations with high resolution is developed which significantly reduces the effect of measurement uncertainty; a method of estimating specification uncertainty is proposed for evaluating the intrinsic uncertainties of measurands; and the impact of model resolution on the specifications of indirectly measured quantities is discussed via a contradiction in the specifications of surface profiles.
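For reference, the order-theoretic notion mentioned above has the following standard definition (background only, not a result of the thesis):

```latex
% Standard definition of a (monotone) Galois connection between posets
% (A, \leq_A) and (B, \leq_B); background only, not a result of the thesis.
% Monotone maps F : A -> B and G : B -> A form a Galois connection when
\[
  F(a) \le_B b \iff a \le_A G(b)
  \qquad \text{for all } a \in A,\ b \in B.
\]
```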