171

Developing a calculus for relating computer programs

Harrison, M. D. January 1978 (has links)
No description available.
172

Applications of coherent phonons in quantum optics : prospects for optically controlled ensemble quantum memories in the solid state

Waldermann, Felix C. January 2009 (has links)
Quantum information processing (QIP) encompasses powerful methods of communication, cryptography and computation. For a successful deployment of quantum optics to QIP, quantum memories are necessary, i.e. devices that coherently store single photons. We here studied an off-resonant (Raman) type of quantum memory, and its possible realisation using solid state absorbers. A single photon is adiabatically absorbed by an ensemble of absorbers with Λ-level structure, in a process mediated by a strong classical control field. Coherent retrieval is achieved by a second control pulse. The storage efficiency has been calculated as a function of the ensemble optical depth, the absorber dipole moments, and the control field power. This scheme can be realised with a variety of quantum absorbers, e.g. atoms in a Brownian gas or semiconductor quantum dots. Absorbers in the solid state are advantageous because of their temporal stability, large dipole moments, and high densities. Two absorber types in the solid state have been analysed: the nitrogen vacancy (NV) centre in diamond, and delocalised optical phonon modes in Raman active solids. A review of the optical properties of the NV centre has been conducted, paying special attention to the attributes required for a quantum memory. A new, broadband quantum memory scheme has been introduced, using phonon sidebands for storage. An experimental analysis of a high energy ion irradiated, nitrogen-rich diamond sample created by high pressure and high temperature (HPHT) has been conducted to determine possible defect densities achievable. In order to achieve high fidelity storage, the photon-ensemble interaction strength can be enhanced by a microcavity. Various types of microcavities have been created in single crystal diamond, and analysed using scanning electron microscopy (SEM) and micro-photoluminescence. Bulk phonon modes in diamond have also been investigated to assess their suitability for quantum memories and entanglement. Optical phonons can coherently be excited using Raman scattering with ultrafast laser pulses. This excitation type could be used for storage of broadband photons, if the optical retrieval (readout) of a phonon is possible. To analyse the storage lifetime, a novel technique for the decoherence measurement of optical phonons has been developed, using spectral interference of Stokes light. It makes it possible to determine the lifetime of coherent phonons at the quantum level, i.e. in the regime relevant for quantum memories. We demonstrated this scheme with phonons in diamond. We also present an experiment realising the detection of single optical phonons, which has been achieved using anti-Stokes scattering. This allows for an ultrafast population readout of optical phonon modes at the Brillouin zone centre.
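The spectral-interference technique mentioned above amounts to tracking how the visibility of interference between Stokes fields decays with the delay between excitation pulses. The sketch below is purely illustrative: all delays, visibilities and the fitted lifetime are invented for the example and are not measurements from the thesis; it only shows how a phonon coherence time could be extracted by fitting an exponential decay to visibility data.

```python
# Illustrative only: fit an exponential decay to interference-visibility data
# to extract a phonon coherence time. All numbers below are invented for the
# example; they are not measurements from the thesis.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fringe visibilities of the Stokes interference at increasing
# delays between the two excitation pulses (picoseconds).
delays_ps = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 10.0])
visibility = np.array([0.92, 0.81, 0.64, 0.45, 0.33, 0.20, 0.09])

def decay(t, v0, tau):
    """Loss of phonon coherence modelled as V(t) = V0 * exp(-t / tau)."""
    return v0 * np.exp(-t / tau)

(v0, tau), _ = curve_fit(decay, delays_ps, visibility, p0=(1.0, 5.0))
print(f"fitted initial visibility V0 ~ {v0:.2f}")
print(f"fitted phonon coherence time tau ~ {tau:.1f} ps")
```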
173

Architectures for quantum computation under restricted control

Fitzsimons, Joseph Francis January 2007 (has links)
In this thesis, we introduce several new architectures for quantum computing under restricted control. We consider two specific cases of restricted control: systems in which local control of individual qubits is impossible, and systems in which qubits are connected by a probabilistic entangling mechanism. First we will consider densely packed spin chains and introduce a novel scheme for performing universal quantum computing within this system using only global control. Global control avoids local manipulation of individual qubits, using instead identical operations on each qubit. We present an experimental demonstration of this scheme using NMR techniques. Next we propose an architecture for implementing fault-tolerant computation using only global control. We will consider a chain consisting of repeating patterns of three distinct species. The structure of this chain allows error correction to be performed in parallel. We describe the necessary operations required to construct a universal set of fault-tolerant operations, and prove the existence of a fault-tolerance threshold. We finish our discussion of global control with a look at the ultimate limits of control within quantum systems. We describe a technique for calculating an upper bound on the number of accessible qubits within any quantum system, and derive upper bounds on the number of usable qubits in a range of spin networks. Next we will propose an architecture for distributed quantum computing using linear optics to generate entanglement between matter qubits. Using the graph state formalism we show that such entanglement can be generated in a deterministic manner when linking systems consisting of coupled pairs of qubits. We finish our discussion of distributed quantum computing by presenting an algorithm for reducing the number of qubits required to implement a graph state calculation. This algorithm efficiently removes redundant qubits from the measurement pattern used to perform the computation.
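For orientation, the toy sketch below (Python with networkx) illustrates one elementary graph-state fact that redundancy-removal arguments commonly rely on: a qubit measured in the Pauli-Z basis can simply be deleted from the graph, up to local corrections on its neighbours. This is a generic illustration, not the specific redundancy-removal algorithm developed in the thesis.

```python
# Generic illustration (not the thesis algorithm): in the graph-state formalism a
# Pauli-Z measurement removes the measured vertex from the graph, up to local
# corrections on its neighbours, so qubits that are only ever Z-measured can be
# dropped from the resource state.
import networkx as nx

def remove_z_measured(graph: nx.Graph, qubit) -> nx.Graph:
    """Graph state left after a Z measurement of `qubit` (byproducts ignored)."""
    reduced = graph.copy()
    reduced.remove_node(qubit)  # deleting the vertex also deletes its edges
    return reduced

# A 5-qubit linear cluster state; suppose qubit 2 turns out to be redundant.
cluster = nx.path_graph(5)
print(sorted(cluster.edges()))                         # [(0, 1), (1, 2), (2, 3), (3, 4)]
print(sorted(remove_z_measured(cluster, 2).edges()))   # [(0, 1), (3, 4)]
```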
174

A data flow model of parallel processing

Hankin, C. L. January 1979 (has links)
No description available.
175

A study of objects

Hankin, Paul Derek January 2001 (has links)
We study theoretical aspects of the object-oriented programming methodology. We develop tools for specifying and reasoning about programming languages. The main contributions of the thesis are as follows. We define an abstract machine and compiler, based on the ZAM of Leroy, for executing impσ programs, an object-oriented language invented by Abadi and Cardelli. We demonstrate that it is possible to 'unload' states of the machine back into source configurations, and we use this technology to prove that the compilation strategy is correct. We define an imperative, concurrent object calculus, concσ, extending the impσ calculus of Abadi and Cardelli. We use a chemical semantics, and thereby avoid the need for the notions of configuration, store and labelled transition semantics. We demonstrate how one may extend concσ with synchronization primitives. We give examples of concσ programs, including an encoding of the π-calculus. We present a more conventional, structural operational semantics for our calculus, and prove that the chemical and structural semantics coincide for well-formed terms. We demonstrate that it is possible to transfer type systems from Abadi and Cardelli's book, "A Theory of Objects", to our calculus. In addition we provide a type system which guarantees that programs are single-threaded. We study may-testing equivalence for concσ. We develop a proof tool for proving programs equivalent: a context lemma. We use the lemma to prove some equational laws, and prove that an encoding of impσ within concσ is sound.
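As background, Abadi and Cardelli's objects are records of methods in which each method receives the host object as its self parameter; invocation selects and runs a method, and update replaces one. The sketch below is a minimal, purely functional reading of these two operations in Python; it is orientation only and does not reflect the imperative or concurrent features (stores, threads, synchronization) treated in the thesis.

```python
# Orientation only: a minimal functional reading of Abadi-Cardelli objects, where
# an object is a record of methods and every method receives the host object as
# its self argument. The imperative, concurrent features studied in the thesis
# are deliberately absent.

def select(obj, label):
    """Method invocation o.l: run method l with the whole object bound to self."""
    return obj[label](obj)

def update(obj, label, method):
    """Method update o.l := m: a copy of the object with method l replaced."""
    new_obj = dict(obj)
    new_obj[label] = method
    return new_obj

# A one-field "cell" encoded as an object with a get method and a set method.
cell = {
    "get": lambda self: 0,
    "set": lambda self: (lambda n: update(self, "get", lambda s: n)),
}

cell42 = select(cell, "set")(42)   # store 42 by overriding the get method
print(select(cell42, "get"))       # 42
```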
176

Multiversal algebra

Lobo, Francisco January 2013 (has links)
This thesis discusses two ideas, multiversal algebra and algebraic enrichment, and one potential application for the latter, sequential scheduling. Multiversal algebra is a proposal for the reconsideration of semigroupoid and category theory within a framework that extends the approach of universal algebra. The idea is to introduce the notion of algebraic operation relative to a given binary relation, as an alternative to the notion of operation on a carrier class. It is shown that for a particular class of relations the derived notion of category coincides with that of standard category theory. Algebraic enrichment is the name given to a series of similar constructions translating between external and internal algebraic structure, which are studied as a first step towards generalizing the seminal results of Eckmann and Hilton, and for the application to sequential scheduling. This well-known combinatorial engine of game semantics is shown to form part of a double semigroupoid, and this new algebraic perspective on scheduling offers a new direction for the study of game models and their innocence condition.
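As a rough orientation on the idea of an operation defined relative to a binary relation rather than on a single carrier class, the toy sketch below composes typed arrows only when a "composable" relation holds between them. This is a generic partial-composition example for intuition only, not the construction developed in the thesis.

```python
# Toy orientation example, not the construction from the thesis: an operation
# "relative to a binary relation" behaves like a partial composition that is only
# defined on related pairs, here typed arrows that may be composed only when the
# codomain of one matches the domain of the other.
from typing import NamedTuple

class Arrow(NamedTuple):
    src: str
    dst: str
    name: str

def composable(g: Arrow, f: Arrow) -> bool:
    """The binary relation: g may follow f only if f ends where g starts."""
    return f.dst == g.src

def compose(g: Arrow, f: Arrow) -> Arrow:
    if not composable(g, f):
        raise ValueError("composition undefined: the pair is not related")
    return Arrow(f.src, g.dst, f"{g.name} . {f.name}")

f = Arrow("A", "B", "f")
g = Arrow("B", "C", "g")
print(compose(g, f))   # Arrow(src='A', dst='C', name='g . f')
```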
177

An algebraic semantics of Prolog control

Ross, Brian James January 1992 (has links)
The conceptual distinction between logic and control is an important tenet of logic programming. In practice, however, logic programming languages use control strategies which profoundly affect the computational behaviour of programs. For example, sequential Prolog's depth-first, left-first control is an unfair strategy under which nontermination can easily arise if programs are ill-structured. Formal analyses of logic programs therefore require an explicit formalisation of the control scheme. To this end, this research introduces an algebraic process semantics of sequential logic programs written in Milner's Calculus of Communicating Systems (CCS). The main contribution of this semantics is that the control component of a logic programming language is concisely modelled. Goals and clauses of logic programs correspond semantically to sequential AND and OR agents respectively, and these agents are suitably defined to reflect the control strategy used to traverse the AND/OR computation tree for the program. The main difference between this and other process semantics which model concurrency is that the processes used here are sequential. The primary control strategy studied is standard Prolog's left-first, depth-first control. CCS is descriptively robust, however, and a variety of other sequential control schemes are modelled, including breadth-first, predicate freezing, and nondeterministic strategies. The CCS semantics for a particular control scheme is typically defined hierarchically. For example, standard Prolog control is initially defined in basic CCS using two control operators which model goal backtracking and clause sequencing. Using these basic definitions, higher-level bisimilarities are derived, which are more closely mappable to Prolog program constructs. By using various algebraic properties of the control operators, as well as the stream domain and theory of observational equivalence from CCS, a programming calculus approach to logic program analysis is permitted. Some example applications using the semantics include proving program termination, verifying transformations which use cut, and characterising some control issues of partial evaluation. Since process algebras have already been used to model concurrency, this thesis suggests that they are an ideal means for unifying the operational semantics of the sequential and concurrent paradigms of logic programming.
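To make the control strategy being formalised concrete, the toy propositional interpreter below spells out depth-first, left-first execution directly in code: the leftmost goal is expanded first, clauses are tried in textual order, and failure causes backtracking. It is only an operational illustration of that strategy; the thesis models the control algebraically in CCS, which is not reproduced here, and the example program is invented.

```python
# Toy propositional interpreter making the control strategy explicit: the leftmost
# goal is expanded first, clauses are tried in textual order, and failure causes
# backtracking. This only illustrates the strategy the thesis formalises.

PROGRAM = {                    # head -> alternative clause bodies, in textual order
    "a": [["b", "c"], ["d"]],  # a :- b, c.   a :- d.
    "b": [[]],                 # b.
    "c": [["b"]],              # c :- b.
    "d": [[]],                 # d.
}

def solve(goals):
    """True iff the goal list succeeds under depth-first, left-first control."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]
    for body in PROGRAM.get(first, []):   # try clauses top to bottom
        if solve(body + rest):            # expand the leftmost goal first
            return True
    return False                          # all clauses exhausted: backtrack

print(solve(["a"]))   # True
```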
178

Synthetic voice design and implementation

Cowley, Christopher K. January 1999 (has links)
The limitations of speech output technology emphasise the need for exploratory psychological research to maximise the effectiveness of speech as a display medium in human-computer interaction. Stage 1 of this study reviewed speech implementation research, focusing on general issues for tasks, users and environments. An analysis of design issues was conducted, related to the differing methodologies for synthesised and digitised message production. A selection of ergonomic guidelines were developed to enhance effective speech interface design. Stage 2 addressed the negative reactions of users to synthetic speech in spite of elegant dialogue structure and appropriate functional assignment. Synthetic speech interfaces have been consistently rejected by their users in a wide variety of application domains because of their poor quality. Indeed the literature repeatedly emphasises quality as being the most important contributor to implementation acceptance. In order to investigate this, a converging operations approach was adopted. This consisted of a series of five experiments (and associated pilot studies) which homed in on the specific characteristics of synthetic speech that determine the listeners' varying perceptions of its qualities, and how these might be manipulated to improve its aesthetics. A flexible and reliable ratings interface was designed to display DECtalk speech variations and record listeners' perceptions. In experiment one, 40 participants used this to evaluate synthetic speech variations on a wide range of perceptual scales. Factor analysis revealed two main factors: "listenability", accounting for 44.7% of the variance and correlating with the DECtalk "smoothness" parameter at .57 (p<0.005) and "richness" at .53 (p<0.005); and "assurance", accounting for 12.6% of the variance and correlating with "average pitch" at .42 (p<0.005) and "head size" at .42 (p<0.005). Complementary experiments were then required in order to address appropriate voice design for enhanced listenability and assurance perceptions. With a standard male voice set, 20 participants rated enhanced smoothness and attenuated richness as contributing significantly to speech listenability (p<0.001). Experiment three, using a female voice set, yielded comparable results, suggesting that further refinements of the technique were necessary in order to develop an effective methodology for speech quality optimization. At this stage it became essential to focus directly on the parameter modifications that are associated with the aesthetically pleasing characteristics of synthetic speech. If a reliable technique could be developed to enhance perceived speech quality, then synthesis systems based on the commonly used DECtalk model might assume some of their considerable yet unfulfilled potential. In experiment four, 20 subjects rated a wide range of voices modified across the two main parameters associated with perceived listenability, smoothness and richness. The results clearly revealed a linear relationship between enhanced smoothness and attenuated richness and significant improvements in perceived listenability (p<0.001 in both cases). Planned comparisons were conducted between the different levels of the parameters and revealed significant listenability enhancements as smoothness was increased, and a similar pattern as richness decreased. Statistical analysis also revealed a significant interaction between the two parameters (p<0.001) and a more comprehensive picture was constructed.
In order to expand the focus and enhance the generality of the research, it was now necessary to assess the effects of synthetic speech modifications whilst subjects were undertaking a more realistic task. Passively rating the voices independent of processing for meaning is arguably an artificial task which rarely, if ever, would occur in 'real-world' settings. In order to investigate perceived quality in a more realistic task scenario, experiment five introduced two levels of information processing load. The purpose of this experiment was firstly to see if a comprehension load modified the pattern of listenability enhancements, and secondly to see if that pattern differed between high and low load. Techniques for introducing cognitive load were investigated and comprehension load was selected as the most appropriate method in this case. A pilot study distinguished two levels of comprehension load from a set of 150 true/false sentences and these were recorded across the full range of parameter modifications. Twenty subjects then rated the voices using the established listenability scales as before, but also performed the additional task of processing each spoken stimulus for meaning and determining the authenticity of the statements. Results indicated that listenability enhancements did indeed occur at both levels of processing, although at the higher level variations in the pattern occurred. A significant difference was revealed between optimal parameter modifications for conditions of high and low cognitive load (p<0.05). The results showed that subjects perceived the synthetic voices in the high cognitive load condition to be significantly less listenable than those same voices in the low cognitive load condition. The analysis also revealed that this effect was independent of the number of errors made. This result may be of general value because conclusions drawn from these findings are independent of any particular parameter modifications that may be exclusively available to DECtalk users. Overall, the study presents a detailed analysis of the research domain combined with a systematic experimental program of synthetic speech quality assessment. The experiments reported establish a reliable and replicable procedure for optimising the aesthetically pleasing characteristics of DECtalk speech, but the implications of the research extend beyond the boundaries of a particular synthesiser. Results from the experimental program lead to a number of conclusions, the most salient being that not only does the synthetic speech designer have to overcome the general rejection of synthetic voices based on their poor quality by sophisticated customisation of synthetic voice parameters, but that he or she also needs to take into account the cognitive load of the task being undertaken. The interaction between cognitive load and optimal settings for synthesis requires direct consideration if synthetic speech systems are going to realise and maximise their potential in human-computer interaction.
179

Evaluating the impact of variation in automatically generated embodied object descriptions

Foster, Mary Ellen January 2007 (has links)
The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well-specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output. In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head. In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of including variation into the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics. The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users’ ability to identify contextual tailoring in speech while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users’ preferences.
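As an illustration of what an "automated metric based on simple corpus similarity" can look like, the sketch below computes a clipped n-gram overlap between a generated string and a set of corpus examples. It is a generic surface-overlap measure included only to make the notion concrete; the specific metrics evaluated in the thesis are not reproduced here, and the example strings are invented.

```python
# Generic example of a corpus-similarity metric (clipped n-gram precision against
# corpus examples); not one of the specific metrics evaluated in the thesis.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_score(candidate, references, n=2):
    """Fraction of candidate n-grams that also occur in the reference examples."""
    cand = Counter(ngrams(candidate.split(), n))
    if not cand:
        return 0.0
    ref = Counter()
    for r in references:                       # clip counts against the best reference
        for gram, count in Counter(ngrams(r.split(), n)).items():
            ref[gram] = max(ref[gram], count)
    matched = sum(min(count, ref[gram]) for gram, count in cand.items())
    return matched / sum(cand.values())

corpus = ["the red chair has a curved back", "the table is made of oak"]
print(overlap_score("the red table is made of oak", corpus))   # ~0.83
```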
180

Evidence based design of heuristics : usability and computer assisted assessment

Sim, Gavin January 2009 (has links)
The research reported here examines the usability of Computer Assisted Assessment (CAA) and the development of domain specific heuristics. CAA is being adopted within educational institutions and the pedagogical implications are widely investigated, but little research has been conducted into the usability of CAA applications. The thesis is: severe usability problems exist in CAA applications causing unacceptable consequences, and using an evidence based design approach CAA heuristics can be devised. The thesis reports a series of evaluations that show severe usability problems do occur in three CAA applications. The process of creating domain specific heuristics is analysed and critiqued, and a novel evidence based design approach for the design of domain specific heuristics is proposed. Gathering evidence from evaluations and the literature, a set of heuristics for CAA are presented. There are four main contributions to knowledge in the thesis: the heuristics; the corpus of usability problems; the Damage Index for prioritising usability problems from multiple evaluations; and the evidence based design approach to synthesise heuristics. The focus of the research evolves, with the first objective being to determine if severe usability problems exist that can cause users difficulties and dissatisfaction, with unacceptable consequences, whilst using existing commercial CAA software applications. Using a survey methodology, students report a level of satisfaction, but due to low inter-group consistency surveys are judged to be ineffective at eliciting usability problems. Alternative methods are analysed and the heuristic evaluation method is judged to be suitable. A study is designed to evaluate Nielsen's heuristic set within the CAA domain, and the heuristics are deemed to be ineffective based on the formula proposed by Hanson et al. (2003). Domain specific heuristics are therefore necessary, and further studies are designed to build a corpus of usability problems to facilitate the evidence based design approach to synthesise a set of heuristics. In order to aggregate the corpus and prioritise the severity of the problems, a Damage Index formula is devised. The work concludes with a discussion of the heuristic design methodology and potential for future work; this includes the application of the CAA heuristics and applying the heuristic design methodology to other specific domains.
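The Damage Index formula itself is not reproduced here. Purely as a hypothetical illustration, an index of this kind might weight how often a problem was observed across evaluations by the mean of its severity ratings, as in the sketch below; the function name and weighting are invented for the example and are not the formula devised in the thesis.

```python
# Purely hypothetical sketch: not the Damage Index formula devised in the thesis.
# This invented version weights how often a problem was found across evaluations
# by its mean severity rating.
def damage_index(severity_ratings, num_evaluations):
    """severity_ratings: scores from each evaluation that found the problem;
    num_evaluations: total number of evaluations carried out."""
    frequency = len(severity_ratings) / num_evaluations
    mean_severity = sum(severity_ratings) / len(severity_ratings)
    return frequency * mean_severity

# A problem found in 3 of 4 evaluations, with severities 4, 3 and 4 (scale 1-4).
print(round(damage_index([4, 3, 4], 4), 2))   # 2.75
```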
