About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
741

Simultaneous all-optical processing of wavelength division multiplexing channels

Provost, Lionel Andre January 2012 (has links)
In this thesis, the possibility of simultaneous all-optical regeneration of wavelength-division multiplexed (WDM) signals within the same optical device is investigated. The optical regeneration scheme discussed in this thesis relies on the exploitation of the self-phase modulation (SPM) induced by the optical Kerr nonlinearity within an optical fibre. In the work presented in this thesis, I report the extension of a particular single-channel all-optical 2R regenerator, suitable for the on-off keying return-to-zero modulation format, to WDM operation. The device is referred to as the Mamyshev regenerator, and provides both Re-amplification and Re-shaping capabilities for the incoming optical signal. An in-depth analysis of the single-channel device reveals that remarkable and simple scaling rules can be established that relate the output properties of the optical regenerator to the characteristics of the incoming signal to be regenerated and to the key physical parameters defining the optical regenerator. The analysis allows general conclusions to be drawn on the mitigation strategies to be implemented to extend the scheme to the multi-channel case. The extension to the multi-channel scenario is then examined. The interaction time between adjacent channels is minimized by inducing a sufficient walk-off between co-propagating signals. The strength of the inter-channel nonlinearities can thereby be sufficiently reduced to preserve the optical regeneration capabilities. Two techniques are therefore reported. One is based on the counter-propagation of two optical signals within the same piece of nonlinear fibre. The second relies on polarization multiplexing of two co-propagating signals. Theoretical aspects and experimental demonstrations at 10 Gb/s, 40 Gb/s and 130 Gb/s are reported.
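
To illustrate the regeneration principle described above, the short sketch below propagates a single pulse through a Kerr-nonlinear fibre with a split-step Fourier method and then applies a frequency-offset band-pass filter, tracing the step-like power transfer function on which the Mamyshev regenerator relies. All fibre, pulse and filter parameters (gamma, beta2, fibre length, pulse width, filter offset and bandwidth) are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

def propagate_spm(field, dt_ps, length_km, gamma, beta2, steps=200):
    """Symmetric split-step Fourier propagation with Kerr nonlinearity (SPM) and dispersion."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=dt_ps)          # angular frequency grid (rad/ps)
    dz = length_km / steps
    half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))           # half dispersion step
    for _ in range(steps):
        field = np.fft.ifft(half_disp * np.fft.fft(field))
        field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)   # SPM (Kerr) step
        field = np.fft.ifft(half_disp * np.fft.fft(field))
    return field

def mamyshev_output_peak(peak_power_w, offset_thz=0.2, bw_thz=0.1):
    """Output peak power after SPM broadening and an offset Gaussian band-pass filter."""
    n, dt = 2**12, 0.05                                          # samples, sample spacing (ps)
    t = (np.arange(n) - n / 2) * dt
    pulse = np.sqrt(peak_power_w) * np.exp(-t**2 / (2 * 2.0**2))  # ~2 ps Gaussian pulse
    out = propagate_spm(pulse, dt, length_km=1.0, gamma=10.0, beta2=0.5)
    f = np.fft.fftfreq(n, d=dt)                                  # frequency offset from carrier (THz)
    filt = np.exp(-((f - offset_thz)**2) / (2 * bw_thz**2))      # offset Gaussian filter
    return np.max(np.abs(np.fft.ifft(np.fft.fft(out) * filt))**2)

# Sweeping the input peak power traces the step-like power transfer function
# that provides the Re-amplification/Re-shaping (2R) behaviour.
for p_in in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(f"P_in = {p_in:4.2f} W -> P_out = {mamyshev_output_peak(p_in):.3f} W")
```

Low input peak powers produce little spectral broadening and are largely blocked by the offset filter, whereas powers above the threshold are transmitted, which is the origin of the reshaping behaviour.
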
742

Cross-layer operation aided wireless networks

Chen, Hong January 2010 (has links)
In this thesis, we propose several cross-layer operation aided schemes conceived for wireless networks. Cross-layer design may overcome the disadvantages of the network's layered architecture, where layering is most typically represented by the Transport Control Protocol (TCP) / Internet Protocol (IP) suite. We invoke Fountain codes for protecting file transfer at the application layer, since they are suitable for erasure channels. They are also often referred to as rateless codes. When implementing Fountain code aided file transfer, the file is first partitioned into a number of blocks, each of which contains K packets. Fountain codes randomly select several packets from a block and then combine them using exclusive-OR additions to generate an encoded packet; a sketch of this step is given at the end of this abstract. The encoding continues until all blocks are successfully received. Considering an 802.11 Wireless Local Area Network (WLAN) scenario, the packet size has to be appropriately chosen, since there exists a trade-off between the packet size and the transmission efficiency, which is defined as the ratio of the number of primary information bits to the total number of transmitted bits, including headers, control packets and retransmitted replicas. In order to find the optimum packet size, the transmission efficiency is formulated as a function of the Packet Loss Ratio (PLR) at the application layer and of the total load imposed by a single packet. The PLR at the application layer is related to the packet size, to the 802.11 Media Access Control (MAC) retransmission mechanism and to the modulation scheme adopted by the physical layer. Apart from its source data, the total load imposed by an information packet also contains the control packets of the 802.11 MAC protocol, such as the Request To Send (RTS) / Clear To Send (CTS) messages, the retransmitted replicas and the Acknowledgement (ACK) messages. According to these relations, the transmission efficiency may finally be expressed as a function of the packet size. Based on the numerical analysis of this function, the optimum packet size may be determined. Our simulation results confirmed that the highest transmission efficiency is indeed achieved when using the optimum packet size. Since turbo codes are capable of achieving near-capacity performance, they may be successfully combined with Hybrid Automatic Repeat reQuest (HARQ) schemes. In this thesis, the classic Twin Component Turbo Codes (TCTCs) are extended to Multiple Component Turbo Codes (MCTCs). In order to apply classic two-dimensional Extrinsic Information Transfer (EXIT) charts for analyzing them, we divided an N-component MCTC into two logical parts. This partitioning was necessary, because otherwise an N-component scheme would require an N-dimensional EXIT chart. One of the parts is constituted by an individual Bahl, Cocke, Jelinek and Raviv (BCJR) decoder, while the other, so-called composite decoder consists of the remaining (N-1) components. The EXIT charts visualized the extrinsic information exchange between these two logical parts of MCTCs. Aided by this partitioning technique, we may find the so-called `open tunnel SNR threshold' for MCTCs, which is defined as the minimum SNR for which the EXIT chart at the specific coding rate used has an open tunnel. It may be used as a metric to compare the achievable performance to the Discrete-input Continuous-output Memoryless Channel's (DCMC) capacity.
Our simulation results showed that the achievable performance of MCTCs is closer to the DCMC capacity than that of non-systematic TCTCs, but slightly further from it than that of systematic TCTCs, if generator polynomials having an arbitrary memory length - and hence complexity - are considered. However, for the lowest-memory octally represented polynomial (2,3)o, which implies having the lowest possible complexity, MCTCs outperform both non-systematic and systematic TCTCs. Furthermore, MCTC aided HARQ schemes using the polynomial (2,3)o exhibit significantly better PLR and throughput performance than systematic as well as non-systematic TCTC aided HARQ schemes using the same polynomial. If systematic TCTC aided HARQ schemes relying on the polynomial (17,15)o are used as benchmarkers, MCTC aided HARQ schemes may significantly reduce the complexity without a substantial degradation of the PLR and throughput. When combining turbo codes with HARQ, the associated complexity becomes a critical issue, since iterative decoding is immediately activated after each transmission. In order to reduce the associated complexity, an Early Stopping (ES) strategy was proposed in this thesis to substitute the fixed number of BCJR operations invoked for each iterative decoding. By observing the EXIT charts of turbo codes, we note that the extrinsic information increases along the decoding trajectory of an open or closed tunnel. The ES aided MCTC HARQ scheme curtails iterative decoding when the Mutual Information (MI) increase becomes less than a given threshold. This threshold was determined by off-line training in order to achieve a trade-off between the throughput and the complexity. Our simulation results verified that the complexity of MCTC aided HARQ schemes may be reduced by as much as 80%, compared to that of systematic TCTC aided HARQ schemes using a fixed number of 10 BCJR operations. Moreover, the complexity of turbo coded HARQ schemes may be further reduced by our Look-Up Table (LUT) based Deferred Iteration (DI) method. The DI method delays the iterative decoding until the receiver estimates that it has received sufficient information for successful decoding, which may be represented by the emergence of an open tunnel in the EXIT chart corresponding to all received replicas. Therefore, the specific MI at which a `just' open tunnel appears, when combined with all previous (i-1) received MIs, constitutes the threshold that has to be satisfied by the ith reception. More specifically, if the MI received during the ith reception is higher than this threshold, the EXIT tunnel is deemed to be open and hence iterative decoding is triggered. Otherwise, iterative decoding is disabled because the tunnel is deemed to be closed, which reduces the complexity. The LUT stores all possible MI thresholds for N-component MCTCs, which results in a large storage requirement if N becomes high. Hence, an efficient LUT design was also proposed in this thesis. Our simulation results demonstrated that the achievable complexity reduction may be as high as 50%, compared to the schemes operating without the DI method.
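
The random exclusive-OR packet combination described at the start of this abstract can be sketched in a few lines. The uniform degree choice below is an illustrative assumption; a practical fountain (e.g. LT) code would draw the degree from a tuned distribution.

```python
# Fountain-style encoding sketch: each encoded packet is the exclusive-OR of a
# random subset of the K source packets in a block.
import os
import random

def encode_packet(block, rng):
    """XOR a random subset of source packets into one encoded packet."""
    k = len(block)
    degree = rng.randint(1, k)                      # assumed (uniform) degree distribution
    chosen = rng.sample(range(k), degree)           # indices of the combined source packets
    encoded = bytearray(len(block[0]))
    for idx in chosen:
        for i, byte in enumerate(block[idx]):
            encoded[i] ^= byte
    return chosen, bytes(encoded)

# Example: a block of K = 8 source packets of 32 bytes each.
rng = random.Random(0)
block = [os.urandom(32) for _ in range(8)]
indices, packet = encode_packet(block, rng)
print("encoded packet formed from source indices", sorted(indices))
```
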
743

Supporting development of Event-B models

Silva, Renato January 2012 (has links)
We believe that the task of developing large systems requires a formal approach. The complexity of these systems demands techniques and tool support that simplify the task of formal development. Often large systems are a combination of sub-components that can be seen as modules. Event-B is a formal methodology that allows the development of distributed systems. Despite several benefits of using Event-B, modularisation and reuse of existing models are not fully supported. We propose three techniques supporting the reuse of models and their respective proof obligations in order to develop specifications of large systems: composition, generic instantiation and decomposition. These techniques are studied and tool support is provided in the form of plug-ins, taking advantage of the extensibility features of the Event-B toolset (the Rodin platform). Composition allows the combination of different sub-components while remaining compatible with refinement. A shared-event approach is followed, in which sub-component events are composed and communicate via common parameters, without variable sharing. By reusing sub-components, the proof obligations required for a valid composition are expressed, and we show that composition is monotonic. A tool is developed that enforces the conditions enabling monotonicity and generates the respective proof obligations. Generic instantiation allows a generic model (a machine or a refinement chain) to be instantiated into a suitable development. The proof obligations of the generic model are reused, avoiding re-proof, and its refinement comes for free. An instantiation constructor is developed in which the generic free identifiers (variables and constants) are renamed and the carrier sets are replaced to fit the instance. Decomposition allows the splitting of a model into several sub-components in either a shared-event or a shared-variable style. Both styles are monotonic and the sub-components can be further refined independently, allowing team development. The proof obligations of the original model are split among the different sub-components, which usually results in simpler, easier-to-discharge proof obligations. Decomposition is supported by a practical tool permitting the use of both styles. We expect this work to help close the gap between the use of formal methods in academia and in industry. In this thesis we address the important aspect of having tools that support well-studied formal techniques and are easy to use by model developers.
744

An investigation into partial discharge activity within three-phase belted cables

Hunter, Jack A. January 2013 (has links)
Industrially driven interest in the field of partial discharge (PD) diagnostics has increased rapidly in recent years. Utilities are turning to continuous asset monitoring methods to inform them of the real-time health of their plant. The majority of London's medium voltage (MV) distribution network is constructed from paper insulated lead covered (PILC) belted cables. The vast majority of this cable was commissioned in the 1960s and 1970s and is now nearing the end of its design life. PD diagnostics have been proposed as a possible tool for the condition monitoring of these distribution cables, yet little is known about the characteristics of the PD activity that is produced as cables of this design degrade under rated conditions. This thesis describes the development of a PD measurement experiment that records PD data from either defective or damaged three-phase MV PILC cables under rated voltage conditions. The experiment has been designed to replicate the environment experienced by cable circuits in the field. The aim was to investigate the potential transfer of the knowledge generated by the experiment onto an on-line commercial operational system. An investigation into the PD produced by the various degradation mechanisms has been undertaken to evaluate the relationship between the PD source conditions and the recorded signals. It has been found that the phase-resolved PD patterns produced by different degradation mechanisms are unique. Consequently, a PD source discrimination technique has been successfully applied to both experimental and field data. The algorithm relies on the finding that the wavelet energy (WE) distribution of a PD pulse is source dependent. A support vector machine (SVM) was used to accurately classify PD pulses from the different sources that had been tested experimentally. The ability to accurately discriminate between different PD sources in both experimental and field data should lead to a significant step forward in the field of PD diagnostics.
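
The source-discrimination step described above can be sketched as follows: each pulse is summarised by its wavelet energy (WE) distribution across decomposition bands, and an SVM is trained on these features. The sketch assumes the pywt and scikit-learn libraries and uses synthetic damped-oscillation pulses in place of measured PD data; the wavelet family ('db4'), decomposition depth and pulse parameters are assumptions rather than the settings used in the thesis.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(pulse, wavelet="db4", level=5):
    """Fraction of total energy captured in each wavelet decomposition band."""
    coeffs = pywt.wavedec(pulse, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

def synthetic_pulse(rng, decay):
    """Damped-oscillation stand-in for a measured PD current pulse."""
    t = np.arange(512)
    return np.exp(-t / decay) * np.sin(2 * np.pi * t / (decay / 4)) + 0.05 * rng.standard_normal(512)

rng = np.random.default_rng(0)
X, y = [], []
for label, decay in enumerate([20.0, 60.0]):        # two assumed PD "sources"
    for _ in range(100):
        X.append(wavelet_energy_features(synthetic_pulse(rng, decay)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # train on half the pulses
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```
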
745

Fault modelling and accelerated simulation of integrated circuits manufacturing defects under process variation

Zhong, Shida January 2013 (has links)
As the silicon manufacturing process scales to and beyond the 65-nm node, process variation can no longer be ignored. The impact of process variation on integrated circuit performance and power has received significant research attention. Variation-aware test, on the other hand, is a relatively new research area that is currently receiving attention worldwide. Research has shown that test without considering process variation may lead to a loss of test quality. Fault modelling and simulation serve as a backbone of manufacturing test. This thesis is concerned with developing efficient fault modelling techniques and simulation methodologies that take into account the effect of process variation on manufacturing defects, with particular emphasis on resistive bridges and resistive opens. The first contribution of this thesis addresses the problem of the long computation time required to generate the logic faults of resistive bridges under process variation, by developing a fast and accurate technique to model the logic fault behaviour of resistive bridges. The new technique employs two efficient voltage calculation algorithms to calculate the logic threshold voltage of driven gates and the critical resistance of a fault-site, enabling the computation of bridge logic faults without using SPICE. Simulation results show that the technique is fast (on average 53 times faster) and accurate (worst-case error of 2.64%) when compared with HSPICE. The second contribution analyses the complexity of delay fault simulation of resistive bridges in order to reduce the computation time of delay fault simulation under process variation. An accelerated delay fault simulation methodology for resistive bridges is developed, employing a three-step strategy to speed up the calculation of the transient gate output voltage, which is needed to accurately compute delay faults. Simulation results show that the methodology is on average 17.4 times faster than HSPICE, with a 5.2% error in accuracy. The final contribution presents an accelerated simulation methodology for resistive opens to address the problem of the long simulation time of delay fault simulation under process variation. The methodology uses two efficient algorithms to accelerate the computation of the transient gate output voltage and the timing-critical resistance of an open fault-site. Simulation results show that the methodology is up to 52 times faster than HSPICE on average, with a 4.2% error in accuracy.
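
The notion of a critical resistance can be illustrated with a deliberately simplified model: a bridge of resistance r_sh connects a node driven high through an effective pull-up resistance to a node driven low through an effective pull-down resistance, and the largest bridge resistance for which the logic-1 node falls below the driven gate's logic threshold is found by bisection. The linear on-resistance model and all parameter values below are assumptions standing in for the SPICE-level device models used in the thesis.

```python
VDD = 1.2            # supply voltage (V), assumed
R_PULL_UP = 500.0    # effective driver pull-up resistance (ohm), assumed
R_PULL_DOWN = 400.0  # effective driver pull-down resistance (ohm), assumed
V_TH = 0.6           # logic threshold of the driven gate (V), assumed

def bridged_high_node_voltage(r_sh):
    """Voltage of the logic-1 node when bridged to the logic-0 node via r_sh."""
    return VDD * (r_sh + R_PULL_DOWN) / (R_PULL_UP + r_sh + R_PULL_DOWN)

def critical_resistance(lo=0.0, hi=1e6, tol=1e-3):
    """Largest bridge resistance at which the logic-1 node drops below V_TH."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bridged_high_node_voltage(mid) < V_TH:
            lo = mid        # fault still visible: the boundary lies at a higher resistance
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"critical bridge resistance ~ {critical_resistance():.1f} ohm")
# Under process variation, R_PULL_UP, R_PULL_DOWN and V_TH would be sampled per
# die, yielding a distribution of critical resistances rather than a single value.
```
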
746

Improvement criteria for constraint handling and multiobjective optimization

Parr, James January 2013 (has links)
In engineering design, it is common to predict performance based on complex computer codes with long run times. These expensive evaluations can make automated and wide-ranging design optimization a difficult task. This becomes even more challenging in the presence of constraints or conflicting objectives. When the design process involves expensive analysis, surrogate (response surface or meta) models can be adapted in different ways to converge efficiently towards global solutions. A popular approach involves constructing a surrogate based on some initial sample evaluated using the expensive analysis. Next, some statistical improvement criterion is searched inexpensively to find model update points that offer some design improvement or model refinement. These update points are evaluated, added to the set of initial designs, and the process is repeated with the aim of converging towards the global optimum. In constrained problems, the improvement criterion is required to update the surrogate models in regions that offer both objective and constraint improvement whilst converging toward the best feasible optimum. In multiobjective problems, the aim is to update the surrogates in such a way that the evaluated points converge towards a well-spaced set of Pareto solutions. This thesis investigates efficient improvement criteria to address both of these situations. This leads to the development of an improvement criterion that better balances improvement of the objective and of all the constraint approximations. A goal-based approach is also developed that is suitable for expensive multiobjective problems. In all cases, improvement criteria are encouraged to select multiple updates, enabling designs to be evaluated in parallel and further accelerating the optimization process.
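
A common baseline for the constrained case discussed above is to weight the expected improvement of the objective surrogate by the predicted probability of feasibility of each constraint surrogate. The sketch below implements this standard EI-times-PoF construction (not the improved criterion developed in the thesis), assuming Gaussian surrogate predictions (mu, sigma) are already available at each candidate update point.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI of a Gaussian surrogate prediction (mu, sigma) over the best feasible objective y_best."""
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_feasibility(g_mu, g_sigma):
    """Product over constraints of P(g <= 0) under each constraint surrogate."""
    g_sigma = np.maximum(np.asarray(g_sigma), 1e-12)
    return np.prod(norm.cdf(-np.asarray(g_mu) / g_sigma), axis=0)

def constrained_ei(mu, sigma, y_best, g_mu, g_sigma):
    return expected_improvement(mu, sigma, y_best) * probability_of_feasibility(g_mu, g_sigma)

# Example: hypothetical surrogate predictions at three candidate update points.
mu = np.array([1.0, 0.4, 0.8])           # objective surrogate mean
sigma = np.array([0.2, 0.3, 0.1])        # objective surrogate standard deviation
g_mu = np.array([[-0.5, 0.1, -0.2]])     # one constraint surrogate, g <= 0 is feasible
g_sigma = np.array([[0.2, 0.2, 0.2]])
print(constrained_ei(mu, sigma, y_best=0.9, g_mu=g_mu, g_sigma=g_sigma))
```
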
747

Decentralised coordination of smart distribution networks using message passing

Miller, Sam January 2014 (has links)
Over the coming years, distribution network operators (DNOs) face the challenge of incorporating an increased number of electrical distributed generators (DGs) into their already capacity-constrained distribution networks. Overcoming this challenge will require the DNOs to use active network management techniques, which are already prevalent in the transmission network, in order to constantly monitor and coordinate these generators, whilst ensuring that the bidirectional flows they engender on the network are safe. Therefore, this thesis presents novel decentralised message passing algorithms that coordinate generators in acyclic electricity distribution networks such that the costs (in terms of carbon dioxide (CO2) emissions) of the entire network are minimised; a technique commonly referred to as optimal dispatch. In more detail, we cast the optimal dispatch problem as a decentralised agent-based coordination problem and formalise it as a distributed constraint optimisation problem (DCOP). We show how this DCOP can be decomposed as a factor graph and solved in a decentralised manner using algorithms based on the generalised distributive law, in particular the max-sum algorithm. We go on to show that max-sum applied naively in this setting performs a large number of redundant computations. To address this issue, we present two novel decentralised message passing algorithms, one discrete and one continuous, that outperform max-sum by pruning much of the search space. Our discrete version is applicable to network settings that are entirely composed of discrete generators (such as wind turbines or solar panels) and in which the constraints of the electricity network have been discretised. Our continuous version can be applied to a wider range of network settings containing multiple types of generators, without the need to discretise the electricity distribution network constraints. We empirically evaluate our algorithms using two large real electricity distribution network topologies and show that they outperform max-sum (in terms of computational time and total size of messages sent).
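
The following sketch runs plain max-sum on a small, purely illustrative acyclic factor graph: two hypothetical generators choose discrete output levels, unary factors encode (negated) emission costs, and a shared factor rewards meeting an assumed demand. It shows only the message-passing machinery, not the pruning introduced by the thesis's discrete and continuous algorithms, and all costs, domains and the demand level are assumptions.

```python
import itertools
import numpy as np

domains = {"g1": [0, 1, 2, 3], "g2": [0, 1, 2, 3]}          # assumed discrete output levels
factors = [
    (("g1",), lambda a: -0.5 * a["g1"]),                    # low-carbon generator cost
    (("g2",), lambda a: -1.5 * a["g2"]),                    # high-carbon generator cost
    (("g1", "g2"), lambda a: 0.0 if a["g1"] + a["g2"] >= 4 else -1e3),  # meet a demand of 4
]

q = {(v, i): np.zeros(len(domains[v])) for i, (scope, _) in enumerate(factors) for v in scope}
r = {(i, v): np.zeros(len(domains[v])) for i, (scope, _) in enumerate(factors) for v in scope}

for _ in range(5):                                          # synchronous max-sum iterations
    for (v, i) in q:                                        # variable -> factor messages
        q[v, i] = sum(r[j, v] for (j, vv) in r if vv == v and j != i)
    for (i, v) in r:                                        # factor -> variable messages
        scope, util = factors[i]
        others = [o for o in scope if o != v]
        msg = np.full(len(domains[v]), -np.inf)
        for k, val in enumerate(domains[v]):
            for combo in itertools.product(*(domains[o] for o in others)):
                a = dict(zip(others, combo), **{v: val})
                total = util(a) + sum(q[o, i][domains[o].index(a[o])] for o in others)
                msg[k] = max(msg[k], total)
        r[i, v] = msg

for v in domains:                                           # each variable decodes locally
    belief = sum(r[i, vv] for (i, vv) in r if vv == v)
    print(v, "->", domains[v][int(np.argmax(belief))])
```

On this tree-structured example the messages converge after a few iterations and the local decoding recovers the minimum-cost dispatch that meets the demand.
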
748

Trust-based algorithms for fusing crowdsourced estimates of continuous quantities

Venanzi, Matteo January 2014 (has links)
Crowdsourcing has provided a viable way of gathering information at unprecedented volumes and speed by engaging individuals to perform simple micro-tasks. In particular, the crowdsourcing paradigm has been successfully applied to participatory sensing, in which the users perform sensing tasks and provide data using their mobile devices. In this way, people can help solve complex environmental sensing tasks, such as weather monitoring, nuclear radiation monitoring and cell tower mapping, in a highly decentralised and parallelised fashion. Traditionally, crowdsourcing technologies were primarily used for gathering data for classification and image labelling tasks. In contrast, such crowd-based participatory sensing poses new challenges that relate to (i) dealing with human-reported sensor data that are available in the form of continuous estimates of an observed quantity, such as a location, a temperature or a sound reading, (ii) dealing with possible spatial and temporal correlations within the data and (iii) issues of data trustworthiness due to the unknown capabilities and incentives of the participants and their devices. Solutions to these challenges need to be able to combine the data provided by multiple users to ensure the accuracy and the validity of the aggregated results. With this in mind, our goal is to provide methods to better aid the aggregation process of crowd-reported sensor estimates of continuous quantities when data are provided by individuals of varying trustworthiness. To achieve this, we develop a trust-based information fusion framework that incorporates latent trustworthiness traits of the users within the data fusion process. Through this framework, we develop a set of four novel algorithms (MaxTrust, BACE, TrustGP and TrustLGCP) to compute reliable aggregations of the users' reports in both the setting of observing a stationary quantity (MaxTrust and BACE) and that of a spatially distributed phenomenon (TrustGP and TrustLGCP). The key feature of all these algorithms is their ability to (i) learn the trustworthiness of each individual who provides the data and (ii) exploit this latent trustworthiness information to compute a more accurate fused estimate. In particular, this is achieved by using a probabilistic framework that allows our methods to simultaneously learn the fused estimate and the users' trustworthiness from the crowd reports. We validate our algorithms in four key application areas (cell tower mapping, WiFi network mapping, nuclear radiation monitoring and disaster response) that demonstrate the practical impact of our framework in achieving substantially more accurate and informative predictions compared to existing fusion methods. We expect that the results of this thesis will make it possible to build more reliable data fusion algorithms for the broad class of human-centred information systems (e.g., recommendation systems, peer reviewing systems, student grading tools) that are based on making decisions upon subjective opinions provided by their users.
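
A minimal illustration of trust-aware fusion of a stationary quantity is to alternate between estimating the fused value and estimating each user's report variance, so that noisier (less trustworthy) users receive lower weight. The sketch below implements this generic precision-weighted scheme, not the MaxTrust, BACE, TrustGP or TrustLGCP algorithms; the simulated reports and noise levels are assumptions.

```python
import numpy as np

def fuse_reports(reports, iters=20, var_floor=1e-6):
    """reports[u] is the list of estimates submitted by user u."""
    fused = np.mean([np.mean(r) for r in reports])       # start from a plain average
    for _ in range(iters):
        # Re-estimate each user's report variance around the current fused value.
        variances = [max(np.mean((np.asarray(r) - fused) ** 2), var_floor) for r in reports]
        # Weight each user's mean by its total precision (report count / variance).
        weights = [len(r) / v for r, v in zip(reports, variances)]
        fused = sum(w * np.mean(r) for w, r in zip(weights, reports)) / sum(weights)
    return fused, variances

rng = np.random.default_rng(1)
true_value = 3.0
reports = [list(true_value + 0.1 * rng.standard_normal(10)),        # reliable user
           list(true_value + 0.1 * rng.standard_normal(10)),        # reliable user
           list(true_value + 2.0 + 1.5 * rng.standard_normal(10))]  # biased, noisy user
fused, variances = fuse_reports(reports)
print("fused estimate:", round(fused, 3), "inferred report variances:", np.round(variances, 2))
```
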
749

Trust and reputation in open multi-agent systems

Huynh, Trung Dong January 2006 (has links)
Trust and reputation are central to effective interactions in open multi-agent systems (MAS), in which agents owned by a variety of stakeholders continuously enter and leave the system. This openness means that existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this thesis develops and evaluates FIRE, a trust and reputation model that enables autonomous agents in open MAS to evaluate the trustworthiness of their peers and to select good partners for interactions. FIRE integrates four sources of trust information under the same framework in order to provide a comprehensive assessment of an agent's likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation and certified reputation, which model trust resulting from direct experiences, role-based relationships, witness reports and third-party references, respectively, in order to provide trust metrics in most circumstances. A novel model of reporter credibility has also been integrated to enable FIRE to deal effectively with inaccurate reports (from witnesses and referees). Finally, adaptive techniques have been introduced which make use of the information gained from monitoring the environment to dynamically adjust a number of FIRE's parameters according to the actual situation an agent finds itself in. In all cases, a systematic empirical analysis is undertaken to evaluate the effectiveness of FIRE in terms of the agent's performance.
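
Schematically, the four components can be combined into a single trust value by a weighted mean, with older ratings discounted by a recency factor, as sketched below. The component weights, the exponential decay and the rating format are illustrative assumptions and do not reproduce FIRE's actual formulas or parameter values.

```python
import math

# Assumed relative weights of the four trust components (illustrative only).
COMPONENT_WEIGHTS = {"interaction": 2.0, "role": 1.0, "witness": 1.0, "certified": 0.5}

def component_trust(ratings, now, decay=0.1):
    """Recency-weighted mean of (timestamp, rating) pairs, ratings in [-1, 1]."""
    if not ratings:
        return None
    weights = [math.exp(-decay * (now - t)) for t, _ in ratings]
    return sum(w * r for w, (_, r) in zip(weights, ratings)) / sum(weights)

def overall_trust(sources, now):
    """Weighted mean over the components for which any ratings are available."""
    parts = [(COMPONENT_WEIGHTS[name], component_trust(ratings, now))
             for name, ratings in sources.items()]
    parts = [(w, t) for w, t in parts if t is not None]
    return sum(w * t for w, t in parts) / sum(w for w, _ in parts) if parts else None

# Example: ratings gathered about one candidate interaction partner.
sources = {
    "interaction": [(1.0, 0.8), (5.0, 0.6), (9.0, 0.9)],   # own direct experiences
    "role": [(0.0, 0.3)],                                   # role-based rules
    "witness": [(4.0, 0.5), (8.0, -0.2)],                   # reports from other agents
    "certified": [(2.0, 0.7)],                              # references the agent presents
}
print("trust estimate:", round(overall_trust(sources, now=10.0), 3))
```
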
750

Iterative learning control : algorithm development and experimental benchmarking

Cai, Zhonglun January 2009 (has links)
This thesis concerns the general area of experimental benchmarking of Iterative Learning Control (ILC) algorithms using two experimental facilities. ILC is an approach which is suitable for applications where the same task is executed repeatedly over a necessarily finite time duration, known as the trial length. The process is reset prior to the commencement of each execution. The basic idea of ILC is to use information from previously executed trials to update the control input to be applied during the next one. The first experimental facility is a non-minimum phase electro-mechanical system and the other is a gantry robot whose basic task is to pick and place objects on a moving conveyor, under synchronization and in a fixed finite time duration, replicating many tasks encountered in the process industries. Novel contributions are made both in the development of new algorithms and, especially, in the analysis of experimental results, both of a single algorithm alone and in the comparison of the relative performance of different algorithms. In the case of non-minimum phase systems, a new algorithm named Reference Shift ILC (RSILC) is developed that has a two-loop structure. One learning loop addresses the system lag and the other tackles the possibility of a large initial plant input, commonly encountered when using basic iterative learning control algorithms. After basic algorithm development and simulation studies, experimental results are given which show a reasonable performance improvement over previously reported algorithms. The gantry robot has previously been used to experimentally benchmark a range of simple-structure ILC algorithms, such as those based on the ILC versions of the classical proportional plus derivative error-actuated controllers, and some state-space based optimal ILC algorithms. Here these results are extended by the first ever detailed experimental study of the performance of stochastic ILC algorithms, together with some modifications to their configuration that are necessary to increase performance. The majority of currently reported ILC algorithms focus mainly on reducing the trial-to-trial error, but it is known that this may come at the cost of poor or unacceptable performance along the trial. Control theory for discrete linear repetitive processes is used to design ILC control laws that enable the control of both trial-to-trial error convergence and the along-the-trial dynamics. These control laws can be computed using Linear Matrix Inequalities (LMIs), and the results of their experimental implementation on the gantry robot are given. These results are the first ever in this key area and represent a benchmark against which alternatives can be compared. The concluding chapter gives a critical overview of the results presented, together with areas for both short and medium term further research.
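
The basic trial-to-trial update can be illustrated with the classical P-type law, in which the next trial's input is the current input plus a scaled version of the current trial's error. The sketch below applies it to an assumed first-order discrete-time plant over repeated finite-duration trials; the plant, reference and learning gain are illustrative and unrelated to the thesis's two experimental facilities.

```python
import numpy as np

A, B, C = 0.9, 0.5, 1.0            # assumed first-order plant x(t+1) = A x(t) + B u(t), y = C x
N = 50                             # trial length in samples
reference = np.sin(2 * np.pi * np.arange(1, N + 1) / N)   # desired output over one trial
gamma = 0.2                        # learning gain: |1 - gamma*C*B| < 1 gives trial-to-trial
                                   # convergence; kept small to avoid poor along-the-trial transients

def run_trial(u):
    """Simulate one trial from the same reset initial state and return the output."""
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        x = A * x + B * u[t]
        y[t] = C * x
    return y

u = np.zeros(N)
for trial in range(10):
    y = run_trial(u)
    error = reference - y
    print(f"trial {trial:2d}: max |error| = {np.max(np.abs(error)):.4f}")
    u = u + gamma * error          # P-type update using the previous trial's error
```

With the small gain used here the error shrinks steadily from trial to trial; larger gains still converge asymptotically but can exhibit exactly the poor along-the-trial learning transients mentioned above.
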
