About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Towards a practically extensible Event-B methodology

Maamria, Issam January 2013 (has links)
Formal modelling is increasingly recognised as an important step in the development of reliable computer software. Mathematics provides a solid theoretical foundation upon which it is possible to specify and implement complex software systems. Event-B is a formalism that uses typed set theory to model and reason about complex systems. Event-B and its associated toolset, Rodin, provide a methodology that can be incorporated into the development process of software and hardware. Refinement and mathematical proof are key features of Event-B that can be exploited to rigorously specify and reason about a variety of systems. Successful and usable formal methodologies must possess certain attributes in order to appeal to end-users; expressiveness and extensibility, among other qualities, are of major importance. In this thesis, we present techniques that enhance the extensibility of: (1) the mathematical language of Event-B, in order to enhance the expressiveness of the formalism, and (2) the proving infrastructure of the Rodin platform, in order to cope with an extensible mathematical language. This thesis makes important contributions towards a more extensible Event-B methodology. Firstly, we show how the mathematical language of Event-B can be made extensible in a way that does not hinder the consistency of the underlying formalism. Secondly, we describe an approach whereby the prover used for reasoning can be augmented with proof rules without compromising the soundness of the framework. The theory component is the placeholder for mathematical and proof extensions. The theoretical contribution of this thesis is the study of rewriting in the presence of partiality. Finally, from a practical viewpoint, proof obligations are used to ensure the soundness of user-contributed extensions.

Autonomous multi-agent reconfigurable control systems

Abu Bakar, Badril January 2013 (has links)
This thesis is an investigation of methods and architectures for autonomous multi-agent reconfigurable controllers. As part of the analysis, two components are examined: the fault detection and diagnosis (FDD) component and the controller reconfiguration (CR) component. The FDD component detects and diagnoses faults. The CR component, on the other hand, adapts or changes the control architecture to accommodate the fault. The problem is to synchronize or integrate these two components within the overall structure of a control system. A novel approach is proposed: a multi-agent architecture is used to interface between the two components, allowing the system to be viewed as a modular structure. Three types of agent are defined: a planner agent Ap, a monitor agent Am and a control agent Ac. The monitor agent takes the role of the FDD component. The planner and control agents, on the other hand, take the roles of the CR component. The planner decides which controller to use and passes it on to Ac. It also decides on the parameter settings of the system and changes them accordingly. It belongs to the reactive agent category: its internal architecture maps sensor data directly to actions using pre-set rule-based conditional logic, a choice made to reduce the overall complexity of the system. The monitor agent Am belongs to the learning agent category. It uses an adaptive resonance theory neural network (ART-NN) to autonomously categorize system faults, and then informs the other agents of the fault status. ART-NN was chosen because it does not need to be trained with sample data and learns to categorize data patterns on the fly; this allows Am to detect unmodelled system faults. The control agent Ac also belongs to the learning agent category. It uses a multi-agent reinforcement learning algorithm to learn a controller for the system at hand.
Once a suitable controller has been learnt, its parameters are passed to Ap to be stored in memory, and learning is terminated. During control execution mode, controller parameters are sent from Ap to Ac. The novel approach is demonstrated on a case study: our laboratory-built 4-wheeled skid-steering vehicle, complete with sensors, serves as the demonstration platform. Several faults are simulated and the response of the demo system is analyzed.
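The three-agent reconfiguration loop described in this abstract can be sketched in code. This is a minimal, hypothetical illustration: the agent names Ap, Am and Ac come from the abstract, but the internals (a toy stand-in for the ART-NN fault categoriser and for the reinforcement-learned controller) are invented for illustration only.

```python
class MonitorAgent:
    """Am: the FDD role. Stand-in for the ART-NN, which clusters
    unseen sensor patterns on the fly without prior training."""
    def __init__(self):
        self.known_faults = {}
    def diagnose(self, sensor_pattern):
        key = tuple(round(x, 1) for x in sensor_pattern)  # crude clustering
        return self.known_faults.setdefault(key, len(self.known_faults))

class PlannerAgent:
    """Ap: reactive, rule-based; stores learnt controllers and
    decides which controller parameters to send to Ac."""
    def __init__(self):
        self.controller_store = {}   # fault id -> controller parameters
    def select(self, fault_id):
        return self.controller_store.get(fault_id)
    def store(self, fault_id, params):
        self.controller_store[fault_id] = params

class ControlAgent:
    """Ac: learns a controller when none is stored (stand-in for
    the multi-agent reinforcement learning step)."""
    def learn(self, fault_id):
        return {"gain": 1.0 + 0.1 * fault_id}

def reconfigure(sensors, am, ap, ac):
    """One pass of the FDD/CR loop: diagnose, look up, learn if needed."""
    fault = am.diagnose(sensors)
    params = ap.select(fault)
    if params is None:               # unmodelled fault: learn, then store
        params = ac.learn(fault)
        ap.store(fault, params)
    return params
```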

Direct UV-written Bragg gratings for waveguide characterisation and advanced applications

Rogers, Helen L. January 2013 (has links)
Direct UV writing is an established fabrication technique allowing channel waveguides and photonic circuits to be defined in a photosensitive glass via an inscription method. A related technique, direct grating writing, enables Bragg grating structures to be defined in an interferometric dual-beam setup, with definition of Bragg grating planes achieved via the periodic modulation of the interference pattern between the beams. A decade of prior work investigating the technique has led to devices for use in sensing, telecommunications, lasing and amplification applications. A requirement for a greater understanding of the propagation characteristics of the waveguides has been identified, in order to maximise the efficiency and effectiveness of these devices. In this thesis, a propagation loss measurement technique and a wavelength-dependent dispersion measurement technique are presented. Both depend on the presence of integrated Bragg grating structures, which enable the propagation characteristics of the waveguides to be investigated. The loss measurement technique involves measurement of the Bragg grating strength, whilst the dispersion measurement technique enables the effective refractive index of the waveguide to be inferred from a measurement of the reflected central grating wavelength. Applications of both techniques in a variety of situations have been investigated, with devices fabricated for use in quantum technologies and cold matter experiments amongst those produced.
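The inference of effective refractive index mentioned above rests on the standard first-order Bragg condition, lambda_B = 2 * n_eff * Lambda, relating the reflected central grating wavelength lambda_B and the grating period Lambda to the effective index n_eff. A minimal sketch of that inference follows; the example values are illustrative, not taken from the thesis.

```python
def effective_index(bragg_wavelength_nm: float, grating_period_nm: float) -> float:
    """Infer the effective refractive index n_eff from the first-order
    Bragg condition: lambda_B = 2 * n_eff * Lambda."""
    return bragg_wavelength_nm / (2.0 * grating_period_nm)

# Illustrative values: a 1550 nm reflection from a 530 nm period grating
n_eff = effective_index(1550.0, 530.0)
```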

Cooperative diversity aided direct-sequence code-division multiple-access systems

Fang, Wei January 2008 (has links)
In relay-assisted direct-sequence code-division multiple-access (DS-CDMA) systems, the distance between the relay and the destination receiver may be significantly shorter than that between the source transmitter and the destination receiver. Therefore, the transmission power of the relay may be significantly reduced in comparison to that of the source transmitter. In this thesis, we investigate the dependence of the achievable bit error ratio (BER) performance of DS-CDMA systems on the specific locations of the relays as well as on the power-sharing among the source transmitters and relays, when considering different propagation pathloss exponents. This thesis is focused on the class of repetition-based cooperation aided schemes, including both amplify-and-forward (AF) as well as decode-and-forward (DF) schemes, with an emphasis on low-complexity AF schemes. In our study, the signals received at the destination receiver from the source transmitters as well as from the relays are detected based on a range of diversity combining schemes having a relatively low complexity. Specifically, the maximal ratio combining (MRC), the maximum signal-to-interference-plus-noise ratio (MSINR) and the minimum mean-square error (MMSE) principles are considered. We propose a novel cooperation aided DS-CDMA uplink scheme, where all the source mobile terminals (MTs) share a common set of relays for the sake of achieving relay diversity. As shown in our study, this low-complexity AF-based cooperation strategy is readily applicable to the challenging scenario where each source MT requires the assistance of several separate relays in order to achieve relay diversity.
Another novel cooperation scheme is proposed for the downlink of DS-CDMA systems, where the downlink multiuser interference (MUI) is suppressed with the aid of transmitter preprocessing, while maintaining the relay diversity order facilitated by the specific number of relays employed, despite using simple matched-filter (MF) based receivers. The transmitter preprocessing schemes considered include both the zero-forcing (ZF) and the MMSE-assisted arrangements, which belong to the class of linear transmitter preprocessing schemes. Furthermore, these transmitter preprocessing schemes are operated under the assumption that the base station’s transmitter employs explicit knowledge about the spreading sequences assigned to the destination MTs, but requires no knowledge about the downlink channels. Our study demonstrates that the proposed relay-assisted DS-CDMA systems using transmitter preprocessing are capable of substantially mitigating the downlink MUI, despite using low-complexity MF receivers.
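Of the combining principles this abstract lists, maximal ratio combining is the simplest to sketch: each receive branch is weighted by the conjugate of its channel gain and the results are summed, which maximises the combined signal-to-noise ratio. The following NumPy fragment is an illustrative textbook version, not the thesis's own implementation.

```python
import numpy as np

def mrc_combine(h, y):
    """Maximal ratio combining: weight each branch by the conjugate of its
    complex channel gain, sum, and normalise by the total channel power.
    For y = h * s + noise this yields an estimate of the symbol s."""
    h = np.asarray(h, dtype=complex)
    y = np.asarray(y, dtype=complex)
    return np.vdot(h, y) / np.sum(np.abs(h) ** 2)
```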

Feature extraction via heat flow analogy

Direkoglu, Cem January 2009 (has links)
Feature extraction is an important field of image processing and computer vision. Features can be classified as low-level or high-level. Low-level features do not convey shape information about objects; popular low-level feature extraction techniques include edge detection, corner detection, thresholding as a point operation and optical flow estimation. High-level features, on the other hand, do convey shape information; popular techniques include active contours, region growing, template matching and the Hough transform. In this thesis, we investigate the heat flow analogy, a physics-based analogy, for both low-level and high-level feature extraction. Three different contributions to feature extraction, based on the heat conduction analogy, are presented in this thesis. The solution of the heat conduction equation depends on the properties of the material and the heat source, as well as on specified initial and boundary conditions. In our contributions, we formulate particular heat conduction problems, in the image and video domains, for feature extraction. The first contribution is moving-edge detection for motion analysis, which is low-level feature extraction. The second contribution is shape extraction from images, which is high-level feature extraction. Finally, the third contribution is silhouette object feature extraction for recognition purposes, which can be considered a combination of low-level and high-level feature extraction. Our evaluations and experimental results show that the heat analogy can be applied successfully for both low-level and high-level feature extraction purposes in image processing and computer vision.
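The underlying analogy is the heat conduction equation du/dt = alpha * laplacian(u), with pixel intensity playing the role of temperature. A minimal explicit finite-difference sketch of image-domain heat flow follows; it is illustrative only (the thesis sets up specific sources and boundary conditions per application), and this explicit scheme is stable only for alpha <= 0.25.

```python
import numpy as np

def heat_diffuse(image, alpha=0.2, steps=10):
    """Explicit finite-difference solution of du/dt = alpha * laplacian(u)
    on an image, with zero-flux (Neumann) borders via edge padding."""
    u = np.asarray(image, dtype=float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]      # vertical neighbours
               + p[1:-1, :-2] + p[1:-1, 2:]    # horizontal neighbours
               - 4 * u)
        u += alpha * lap
    return u
```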

Classification under input uncertainty with support vector machines

Yang, Jianqiang January 2009 (has links)
Uncertainty can exist in any measurement of data describing the real world. Many machine learning approaches attempt to model any uncertainty as additive noise on the target, which can be effective for simple models. However, for more complex models, and where a richer description of anisotropic uncertainty is available, these approaches can suffer. The principal focus of this thesis is the development of advanced classification approaches that incorporate known input uncertainties into support vector machines (SVMs), accommodating isotropic uncertain information in the classification. This new method is termed uncertainty support vector classification (USVC). Kernel functions can be used as well, through the derivation of a novel kernelisation formulation that generalises the proposed technique to non-linear models; the resulting optimisation problem is a second-order cone program (SOCP) with a unique solution. Based on statistical models of the input uncertainty, Bi and Zhang (2005) developed total support vector classification (TSVC), which has a similar geometric interpretation and optimisation formulation to USVC, but assigns much lower probabilities than USVC that the corresponding original inputs will be correctly classified by the optimal solution. Adaptive uncertainty support vector classification (AUSVC) is then developed by combining TSVC and USVC: the probabilities of the original inputs being correctly classified are adaptively adjusted in accordance with the corresponding uncertain inputs. Inheriting the advantages of AUSVC and the minimax probability machine (MPM), minimax probability support vector classification (MPSVC) is developed to maximise the probabilities of the original inputs being correctly classified. Statistical tests are used to evaluate the experimental results of the different approaches.
Experiments illustrate that AUSVC and MPSVC are suitable for classifying the observed uncertain inputs and recovering the true target function respectively, since the contamination is normally unknown to the learner.
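SOCP formulations of this kind typically replace the hard SVM margin with a probabilistic one. As a hedged sketch only (the exact USVC/TSVC constraints differ in detail), one common robust constraint for an input with mean mu and covariance Sigma is y * (w . mu - b) >= 1 + kappa * sqrt(w' Sigma w), where kappa grows with the required probability of correct classification:

```python
import numpy as np

def robust_margin_satisfied(w, b, mu, cov, y, kappa):
    """Check a second-order-cone margin constraint of the kind used for
    classification under Gaussian input uncertainty:
        y * (w . mu - b) >= 1 + kappa * sqrt(w^T Sigma w)
    Larger kappa demands a higher probability of correct classification,
    so the constraint becomes harder to satisfy. Illustrative only."""
    slack = y * (np.dot(w, mu) - b)
    spread = np.sqrt(w @ cov @ w)
    return bool(slack >= 1.0 + kappa * spread)
```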

Modelling the emergence of a basis for vocal communication between artificial agents

Worgan, Simon F. January 2010 (has links)
Understanding the human faculty for speech presents a fundamental and complex problem. We do not know how humans decode the rapid speech signal, and the origins and evolution of speech remain shrouded in mystery. Speakers generate a continuous stream of sounds apparently devoid of any specifying invariant features. Despite this absence, we can effortlessly decode this stream and comprehend the utterances of others. Moreover, the form of these utterances is shared and mutually understood by a large population of speakers. In this thesis, we present a multi-agent model that simulates the emergence of a system with shared auditory features and articulatory tokens. Based upon notions of intentionality and the absence of specifying invariants, each agent produces and perceives speech, learning to control an articulatory model of the vocal tract and perceiving the resulting signal through a biologically plausible artificial auditory system. By firmly establishing each aspect of our model in current phonetic theory, we are able to make useful claims and justify our inevitable abstractions. For example, Lindblom’s theory of hyper- and hypo-articulation, where speakers seek maximum auditory distinction for minimal articulatory effort, justifies our choice of an articulatory vocal tract coupled with a direct measure of effort. By removing the abstractions of previous phonetic models, we have been able to reconsider the current assumption that specifying invariants, in either the auditory or articulatory domain, must indicate the presence of auditory or articulatory symbolic tokens in the cognitive domain. Rather, we consider speech perception to proceed through Gibsonian direct realism, where the signal is manipulated by the speaker to enable the perception of the affordances within speech.
We conclude that the speech signal is constrained by the intention of the speaker and the structure of the vocal tract and decoded through an interaction of the peripheral auditory system and complex pattern recognition of multiple acoustic cues. Far from passive ‘variance mopping’, this recognition proceeds through the constant refinement of an unbroken loop between production and perception.

Resource constrained signal processing algorithms and architectures

Acharyya, Amit January 2011 (has links)
No description available.

Low cost Si nanowire biosensors by recrystallisation technologies

Sun, Kai January 2011 (has links)
No description available.

Semiotic term expansion as the basis for thematic models in narrative systems

Hargood, Charlie January 2011 (has links)
Narratives are a method of communicating information that comes naturally to people and is present in much of our digital and non-digital lives. While work has been undertaken investigating the nature of plot and content within narrative systems, little has been done to model subtext or themes. In this thesis, a machine-understandable thematic model is presented for representing themes within narrative. Each instance of this model forms a definition of a theme and of how it may be deconstructed into other thematic elements and their related features. The model is based on semiotic term expansion, whereby terms may be shown to denote motifs which in turn connote themes. An authoring method has been developed to allow instances of the model to be created. The effectiveness of this approach is demonstrated in four experiments presented within this thesis, centred on the concept of creating thematic definitions and generating thematically relevant images. The first experiment explored a semiotic term expansion method for creating thematic definitions in terms of the model, and a guide to support authors in doing so. This demonstrated that, though further support for authors is needed, creating valid definitions of themes is possible using the method. The following two experiments used a system called the Thematic Montage Builder (TMB), a prototype that uses definitions of the model to create themed photo montages. The first of these experiments compares the ability of this system to generate montages relevant to specific theme-bearing titles against Flickr keyword searches, while the second compares this system to a term expansion system based on co-occurrence. In both cases the TMB generates montages that are judged by participants to better represent the theme in question. In the final experiment, the effect of thematic emphasis on narrative cohesion is investigated.
In this experiment, a set of variables for measuring narrative cohesion is identified and the impact of using themed illustrations from the TMB on short stories is measured. The illustrations reduced the thematic noise of the short stories, and further analysis shows a correlation between thematic cohesion and the perceived 'logical sense' and 'genre cohesion' of the narratives. This work shows that better machine-understandable models of narrative can benefit from an understanding of themes, and that semiotic term expansion may be used to build successful thematic models.
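The semiotic chain at the heart of this model (terms denote motifs, motifs connote themes) can be illustrated with a toy encoding. Every entry below is invented for illustration; the thesis's actual thematic definitions are richer and authored by people.

```python
# Hypothetical denotation/connotation tables: terms -> motifs -> themes.
TERM_TO_MOTIFS = {
    "pumpkin": {"harvest"},
    "skeleton": {"death"},
    "lantern": {"harvest", "night"},
}
MOTIF_TO_THEMES = {
    "harvest": {"halloween", "autumn"},
    "death": {"halloween"},
    "night": {"halloween"},
}

def expand_terms(terms):
    """Expand a set of surface terms into the themes they connote,
    via the motifs they denote (semiotic term expansion)."""
    motifs = set().union(*(TERM_TO_MOTIFS.get(t, set()) for t in terms))
    themes = set().union(*(MOTIF_TO_THEMES.get(m, set()) for m in motifs))
    return motifs, themes
```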
