21

Semantic frameworks for complexity

Gurr, Douglas J. January 1990
This thesis extends denotational semantics to take account of the resource requirements of programs. We describe the approach we have taken in modelling the resource requirements of programs, and motivate the definition of a monoid M of resource values. A connection is established with Moggi's categorical semantics of computations, and this connection is exploited to study complexity as a monad constructor. A formal system, the λcom-calculus, for reasoning about the resource requirements of programs is developed. Operational and denotational semantics are defined for this system, and we prove a correspondence theorem. We show that Moggi's framework is not sufficiently general to capture all the examples of interest to us. Therefore, we define a new class of models based on the idea of an external datum, and we investigate the relationship between these two approaches. The new framework is used to investigate various concepts of importance in complexity theory and the analysis of algorithms. In particular, we show how to capture the notions of input measures, upper bounds on complexity and non-exact complexity.
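To make the idea concrete, here is a minimal Haskell sketch, not the thesis's own construction, of resource counting as a monad built over a monoid of cost values, in the spirit of Moggi's semantics of computations. The monoid, the names `tick` and `countedLength`, and the choice of (Int, +, 0) are all illustrative assumptions.

```haskell
-- A minimal sketch, not the thesis's construction: a cost-counting monad
-- over a monoid M of resource values. A computation of type Cost a
-- returns a value together with the resource it consumed; costs combine
-- by the monoid operation. Here M is (Int, +, 0) for simplicity.
newtype Cost a = Cost (a, Int) deriving Show

instance Functor Cost where
  fmap f (Cost (a, m)) = Cost (f a, m)

instance Applicative Cost where
  pure a = Cost (a, 0)                              -- unit of the monoid
  Cost (f, m) <*> Cost (a, n) = Cost (f a, m + n)   -- monoid operation

instance Monad Cost where
  Cost (a, m) >>= k = let Cost (b, n) = k a in Cost (b, m + n)

-- 'tick' charges one unit of resource; instrumented programs insert it
-- at each step they wish to count.
tick :: Cost ()
tick = Cost ((), 1)

-- Example: a length function that charges one tick per list element.
countedLength :: [a] -> Cost Int
countedLength []       = pure 0
countedLength (_ : xs) = do
  tick
  n <- countedLength xs
  pure (n + 1)

main :: IO ()
main = print (countedLength "abc")  -- Cost (3,3): value 3, cost 3
```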
22

Can deep-sub-micron device noise be used as the basis for probabilistic neural computation?

Hamid, Nor Hisham January 2006
This thesis explores the potential of probabilistic neural architectures as a computational paradigm for future nanoscale MOSFETs. The performance of the Continuous Restricted Boltzmann Machine (CRBM), implemented with simulated device noise in both Random Telegraph Signal (RTS) and “flicker” (1/f) form, is studied. A methodology for time-domain, RTS-based noise analysis is described, based upon the known physics of future nanoscale MOSFETs. This noise is injected into a circuit implementation of the synaptic analogue multiplier and subsequently included in a behavioural model of a stochastic CRBM. The performance of the resultant “deep-sub-micron CRBM” compares well with that of the perfect Gaussian CRBM. In simulation experiments, the CRBM with nanoscale MOSFET noise reconstructs training data with only minor performance degradation. These results do not prove that nanoscale MOSFET noise can be exploited in all contexts, and with all data, for probabilistic computation. The conclusion is, however, that nanoscale MOSFET noise is potentially usable in probabilistic neural computation. This thesis thus introduces a methodology for a form of technology-downstreaming and highlights the potential of probabilistic architectures for computation with future nanoscale MOSFETs.
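As an illustration of time-domain RTS noise, the following Haskell sketch generates a two-state telegraph signal: a single trap toggles the normalised drain current between two levels, with capture and emission modelled as per-sample switching probabilities. The probabilities, current levels and seed are invented parameters, not the thesis's device model; the sketch assumes the `random` package.

```haskell
-- An illustrative sketch only (invented parameters, not the thesis's
-- device model): a two-state Random Telegraph Signal in the time domain.
import System.Random (mkStdGen, randoms)

-- One sample per time step: the trap state switches when the uniform
-- draw falls below the corresponding transition probability.
rts :: Double -> Double -> Int -> [Double]
rts pCapture pEmit n = go False (take n us)
  where
    us = randoms (mkStdGen 42) :: [Double]   -- uniform draws in [0,1)
    go _ [] = []
    go trapped (u : rest)
      | trapped   = low  : go (u >= pEmit)    rest  -- may emit the carrier
      | otherwise = high : go (u <  pCapture) rest  -- may capture one
    high = 1.0   -- normalised current with the trap empty
    low  = 0.9   -- reduced current with the trap occupied (assumed 10%)

main :: IO ()
main = mapM_ print (rts 0.05 0.08 20)
```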
23

Resource provision in object oriented distributed systems

Proctor, Stephen W. January 1989
Using objects to structure distributed systems is becoming an increasingly popular paradigm. This thesis examines some of the fundamental problems associated with resource provision in such systems. A conceptual framework for the work is created by the development of a reference model for object oriented distributed systems. Within this framework, several aspects of resource provision are examined in detail. In each case, an object oriented solution is sought rather than applying existing, process based solutions. The problem of object construction in a heterogeneous environment is addressed, leading to the development of a distributed transformation algorithm for the automatic construction of object representations. A novel scheduling mechanism is developed based upon statistical hypothesis testing. Two applications of this mechanism are simulated in detail: the assignment of invocation messages to object instances, and the suppression of redundant status update messages. The concept of 'virtual properties' is introduced, leading to the development of virtual templates as a re-usable mechanism for endowing objects with properties such as resilience and persistence. The separate resource provision issues addressed are then drawn together to demonstrate how the techniques developed can be used to satisfy users' resource requirements.
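The abstract does not give the details of the statistical scheduling mechanism, so the following Haskell sketch invents a simple variant of the general idea: route an invocation to an alternative object instance only when a two-sample test suggests its response times are genuinely lower. The use of Welch's t statistic and the threshold are illustrative assumptions.

```haskell
-- An illustrative sketch only, not the thesis's mechanism: scheduling by
-- statistical hypothesis testing. Prefer instance B over the default A
-- only when the observed difference in response times is significant.
mean, var :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)
var xs = sum [(x - m) ^ (2 :: Int) | x <- xs] / fromIntegral (length xs - 1)
  where m = mean xs

-- Welch's t statistic for the difference of two sample means.
welchT :: [Double] -> [Double] -> Double
welchT xs ys = (mean xs - mean ys) / sqrt (var xs / nx + var ys / ny)
  where nx = fromIntegral (length xs)
        ny = fromIntegral (length ys)

data Instance = A | B deriving Show

-- Switch only on strong evidence: |t| beyond an assumed critical value
-- of about 2, roughly a 5% significance level.
choose :: [Double] -> [Double] -> Instance
choose timesA timesB
  | welchT timesA timesB > 2.0 = B   -- A significantly slower than B
  | otherwise                  = A

main :: IO ()
main = print (choose [12, 14, 13, 15, 14] [9, 10, 8, 9, 10])  -- B
```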
24

An extended calculus of constructions

Luo, Zhaohui January 1990
This thesis presents and studies a unifying theory of dependent types, ECC (the Extended Calculus of Constructions). ECC integrates Coquand and Huet's (impredicative) calculus of constructions and Martin-Löf's (predicative) type theory with universes, and turns out to be a strong and expressive calculus for the formalization of mathematics, structured proof development and program specification. The meta-theory of ECC is studied and we show that the calculus has good meta-theoretic properties. The main proof-theoretic result is the strong normalization theorem, proved using Girard-Tait's reducibility method based on a quasi-normalization theorem which makes explicit the predicativity of the predicative universes. The strong normalization result shows the proof-theoretic consistency of the calculus; in particular, it implies the consistency of the embedded intuitionistic higher-order logic and the decidability of the theory. The meta-theoretic results establish the theoretical foundations both for pragmatic applications in theorem proving and program specification and for computer implementations of the theory. ECC has been implemented in the proof development system LEGO developed by Pollack. In ECC, dependent Σ-types are non-propositional types residing in the predicative universes, and propositions are lifted as higher-level types as well. This solves the known difficulty that adding strong Σ-types to an impredicative system results in logical paradox, and enables Σ-types to be used to express the intuitionistic notion of subsets. Σ-types together with type universes hence provide useful abstraction and module mechanisms for the abstract description of mathematical theories, and basic mechanisms for program specification and adequate formalization of abstract mathematics (e.g., abstract algebras and notions in category theory). A notion of (abstract) mathematical theory can be described, leading to a promising approach to abstract reasoning and structured reasoning. Program specifications can be expressed by Σ-types, using propositions in the embedded logic to describe program properties (for example, by an equality reflection result, computational equality can be modelled by the propositional Leibniz equality definable in the theory). These developments allow comprehensive structuring of formal or rigorous development of proofs and programs. Also discussed is how the calculus can be understood set-theoretically. We explain an ω-Set (realizability) model of the theory. In particular, propositions can be interpreted as partial equivalence relations and the predicative type universes as corresponding to large set universes.
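The use of Σ-types to pair a program with a proposition about it can be made concrete. The following Lean 4 sketch is an assumed rendering, not ECC or LEGO syntax: the subtype `{ _ // _ }` is a Σ-type whose second component is a proposition, used here as a program specification.

```lean
-- A sketch in Lean 4 (assumed rendering; ECC itself was implemented in
-- LEGO): a Σ-type packaging a program with a proposition about it.
-- The specification: a function on lists together with a proof that it
-- preserves length.
def LengthPreserving : Type :=
  { f : List Nat → List Nat // ∀ l : List Nat, (f l).length = l.length }

-- An inhabitant is an implementation paired with its correctness proof.
def reverseMeetsSpec : LengthPreserving :=
  ⟨List.reverse, fun l => by simp⟩
```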
25

Is game immersion just another form of selective attention? : an empirical investigation of real world dissociation in computer game immersion

Jennett, Charlene Ianthe January 2010
When your daughter is playing video games and you call her to dinner but she fails to respond, do you assume she heard you and ignored you? Everyday descriptions of game immersion suggest that the real world dissociation experienced by gamers could be an extreme form of selective attention. If this were the case, this would mean that your daughter really did not hear you call, due to the complexity of the game environment and a lack of available cognitive resources. This thesis describes a grounded theory that suggests that immersion is a result of self-motivated attention which is enhanced through feedback from the game. Five experimental studies are then described. The experimental studies show that the extent to which a player thinks they are doing well in the game significantly affects their level of immersion, as measured via the Immersive Experience Questionnaire; and has objective effects on their awareness of other things in the environment, namely recall of auditory distracters and reaction time to a visual distracter. Together the evidence suggests that immersion cannot be accounted for solely by selective attention: much of the real world is attenuated during game-play due primarily to the gamer’s motivation to continue the immersive experience. Interestingly, the auditory items that do get through the attenuation filter and are heard by the gamer are those that are personal in some way; so if you used your daughter’s name when you called her, and she did not respond, then based on our findings one might suggest that she chose to ignore you in order to keep her sense of immersion. Additionally, the final experiment shows a dissociation between immersion and cognitive load. This suggests that the differences in immersion were not a result of increased sensory features or task demands, but purely due to motivation.
26

Grounding affect recognition on a low-level description of body posture

Kleinsmith, A. L. January 2010
The research presented in this thesis is centred in the rapidly growing field of affective computing and focuses on the automatic recognition of affect. Numerous diverse technologies have become part of working and social life, so it is crucial to understand whether recognising the user's affective state could increase these technologies' effectiveness. The contributions made are the investigation of a low-level description of body posture, the proposal of a method for creating benchmarks for evaluating affective posture recognition models, and an account of how posture is used to communicate affect. Using a low-level posture description approach, this research aims to create automatic recognition models that may be easily adapted to different application contexts. These recognition models would be able to map low-level descriptions of postural configurations into discrete affective states and levels of affective dimensions. The feasibility of this approach is tested using an incremental procedure with three studies. The first study (acted postures) investigates the feasibility of recognising basic emotions and affective dimensions from acted, i.e., stereotypical, exaggerated expressions. The second study (non-acted postures) aims at recognising subtle affective states and affective dimensions from non-acted body postures in the context of a video game. In both studies, the results showed above-chance-level agreement and reliable consistency between human observers for the discrete affective states and the valence and arousal dimensions. A feature analysis showed that specific low-level features are predictive of affect. The automatic recognition models achieved recognition rates similar to or better than the benchmarks computed. Extending the non-acted postures study, the third study focuses on how the affective posture recognition system performs when applied to sequences of non-acted static postures that have not been manually preselected, as in a runtime situation. An automatic modelling technique was combined with a decision rule defined in this research. The results indicate that posture sequences can be recognised at well above chance level.
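The shape of the low-level mapping the abstract describes, from a vector of posture features to a discrete affective label, can be sketched as follows. The thesis trains recognition models against observer-derived benchmarks; the features, labels and nearest-centroid rule below are all illustrative assumptions, not its method.

```haskell
-- An illustrative sketch only: map a low-level posture description (a
-- vector of, say, joint angles and distances) to a discrete affective
-- label by nearest centroid. Features, labels and centroid values are
-- invented for the example.
import Data.List (minimumBy)
import Data.Ord (comparing)

type Features = [Double]

data Affect = Happy | Sad | Angry deriving Show

euclidean :: Features -> Features -> Double
euclidean xs ys = sqrt (sum [(x - y) ^ (2 :: Int) | (x, y) <- zip xs ys])

-- One centroid per label, e.g. derived from observer-labelled postures.
centroids :: [(Affect, Features)]
centroids =
  [ (Happy, [0.9, 0.8, 0.2])
  , (Sad,   [0.2, 0.1, 0.9])
  , (Angry, [0.8, 0.2, 0.4]) ]

classify :: Features -> Affect
classify p = fst (minimumBy (comparing (euclidean p . snd)) centroids)

main :: IO ()
main = print (classify [0.85, 0.75, 0.25])  -- Happy
```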
27

Type systems for modular programs and specifications

Aspinall, David R. January 1997
This thesis studies the foundations of formal program development. It brings together aspects of algebraic specification and type theory and applies them to powerful new mechanisms for modular programming and specification. The language ASL+ is the vehicle for the study. It is a typed λ-calculus built on top of a core-level programming language and the algebraic specification language ASL. The λ-calculus expresses the structure of programs and specifications in the same language, allowing higher-order parameterisation of each, and allowing specification of parameterised programs. ASL+ has a model-theoretic semantics based on an arbitrary institution, and two formal systems: a type-checking system to check well-formedness, and a proof system to prove that a program satisfies a specification or that one specification refines another. The type-checking system builds on simply typed λ-calculus. The proof system is richer: a type theory with subtyping, dependent products, power types, and singleton types. This is a novel combination; establishing even basic properties poses a challenge. I demonstrate the use of ASL+ with an example program development, introducing some rules and issues such as sharing. The formal study begins with two fundamental investigations into sub-languages of ASL+, new typed λ-calculi in their own right. The first calculus, λ≤{}, features singleton types; the second calculus, λPower, features power types. Both calculi mix subtyping with type-dependency. I prove that each is strongly normalizing and has the expected admissible rules; for example, λ≤{} has subject reduction and minimal typing. The calculus λPower is given a restricted system for rough type-checking which is decidable. Rough types help organize a model definition. I examine two versions of ASL+ itself. The first is an abstract kernel language which treats the underlying core-level languages as sets of combinators. It is built on a calculus λASL+ which extends λ≤{} and λPower. Practical examples must be translated into this version of ASL+, because it does not automatically express the sharing behaviour of parameterised programs. Instead of a translation, I give a second version of ASL+ in a specific institution FTC. The institution is based on FPC, a functional language with recursive types, together with an LCF-style extension to higher-order logic. This concrete version of ASL+ has a more powerful type-checking system than the abstract version, so programs and specifications can be written directly without translation.
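For readers unfamiliar with singleton types, the two characteristic rules of a calculus like λ≤{} can be sketched in the standard style; this is a hedged summary, and the thesis's precise rules may differ.

```latex
% A hedged sketch of the characteristic singleton-type rules; the exact
% rules of $\lambda_{\leq\{\}}$ may differ.
%
% Introduction: any well-typed term inhabits its own singleton.
\[
  \frac{\Gamma \vdash M : A}{\Gamma \vdash M : \{M\}_A}
\]
% Subtyping: the singleton is a subtype of the ambient type, so a term
% typed at $\{M\}_A$ may be used wherever an $A$ is expected.
\[
  \frac{\Gamma \vdash M : A}{\Gamma \vdash \{M\}_A \leq A}
\]
```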
28

Efficient algorithms for listing combinatorial structures

Goldberg, Leslie Ann January 1991
This thesis studies the problem of designing efficient algorithms for listing combinatorial structures. The main notion of efficiency that we use is due to Johnson, Yannakakis, and Papadimitriou. It is called polynomial delay. A listing algorithm is said to have delay d if and only if it satisfies the following conditions whenever it is run with any input p: 1. It executes at most d(p) machine instructions before either producing the first output or halting. 2. After any output it executes at most d(p) machine instructions before either producing the next output or halting. An algorithm is said to have polynomial delay if its delay is bounded from above by a polynomial in the length of the input. In the thesis we also define a weaker notion of efficiency which we call cumulative polynomial delay. There are some families of combinatorial structures for which it is easy to design a polynomial delay listing algorithm. For example, it is easy to design a polynomial delay algorithm that takes as input a unary integer n and lists all n-vertex graphs. In this thesis we focus on more difficult problems such as the following. Problem 1 - Listing unlabeled graphs - Design a polynomial delay algorithm that takes as input a unary integer n and lists exactly one representative from each isomorphism class in the set of n-vertex graphs. Problem 2 - Listing Hamiltonian graphs - Design a polynomial delay algorithm that takes as input a unary integer n and lists all Hamiltonian n-vertex graphs. We start the thesis by developing general methods for solving listing problems such as 1 and 2. Then we apply the methods to specific combinatorial families, obtaining various listing algorithms including the following. 1. A polynomial space polynomial delay listing algorithm for unlabeled graphs. 2. A polynomial space polynomial delay listing algorithm for any first order one property*. 3. A polynomial delay listing algorithm for Hamiltonian graphs. 4. A polynomial space polynomial delay listing algorithm for graphs with cliques of specified sizes. 5. A polynomial space cumulative polynomial delay listing algorithm for k-colorable graphs. * A first order graph property is called a one property if and only if it is the case that almost every graph has the property.
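The "easy" example mentioned above can be sketched directly: listing all n-vertex graphs with polynomial delay. In this Haskell sketch (the edge-subset encoding is an assumption), subsets are enumerated as a binary counter over the edge list, so the work between consecutive outputs is bounded by a polynomial in n.

```haskell
-- A minimal sketch of the "easy" example: list all n-vertex graphs,
-- encoded as edge subsets, with delay polynomial in n. Between outputs
-- we do O(m) work, where m = n(n-1)/2 is the number of possible edges.
type Edge = (Int, Int)

allEdges :: Int -> [Edge]
allEdges n = [(i, j) | i <- [1 .. n], j <- [i + 1 .. n]]

-- Successor of a bit vector (binary counter); Nothing after the last.
next :: [Bool] -> Maybe [Bool]
next []           = Nothing
next (False : bs) = Just (True : bs)
next (True  : bs) = (False :) <$> next bs

allGraphs :: Int -> [[Edge]]
allGraphs n = go (replicate (length es) False)
  where
    es = allEdges n
    decode bs = [e | (e, True) <- zip es bs]   -- edges whose bit is set
    go bs = decode bs : maybe [] go (next bs)

main :: IO ()
main = mapM_ print (allGraphs 3)  -- 2^3 = 8 graphs on 3 vertices
```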
29

Generating program animators from programming language semantics

Berry, Dave January 1990
I present a theory of program animation based on formal semantics. This theory shows how an animator for a language can be generated from a formal specification of that language. Such an animator presents a model of evaluation that is formally correct with respect to the semantics. The theory also provides a framework for comparing definitions of animation operations. The main part of the theory is the definition of an evaluation step. I compare two definitions. The first is based on the transitions used in the transitional style of structured operational semantics, and is motivated by the idea that these transitions represent an intuitive idea of a computation step. Unfortunately this definition produces unsatisfactory animations. However, it can be augmented to give one that better satisfies the needs of the user. Both of these definitions are given in the relational style of structured operational semantics. The first definition is based on an equivalence between the relational and transitional styles; I give a definition of this equivalence. I also discuss the relation between the definition of a step and the choice of semantic formalism. Views of a program in mid-evaluation can be defined by extending the specification of the programming language to include semantic display rules. Each semantic display rule specifies the display of one sub-phrase of the program in mid-evaluation. This approach is powerful enough to define a wide range of views. I also show how the definition of a step can be parameterised on a view. More advanced operations can also be defined in terms of this theory. These operations and the views mentioned above cover most of the features found in existing animators. This indicates that the theory is powerful enough to specify useful systems. The main feature that is not yet provided is the ability to define views that are specific to a single program. These ideas have been implemented in a system called The Animator Generator. Animators produced by The Animator Generator support multiple views and the advanced operations mentioned here. I give a brief description of this system. I finish by discussing how this work could be developed further.
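The idea that one transition of a structured operational semantics is one animation step can be illustrated with a toy language. The mini-language, the `step` function and the printed "view" below are invented for the example, not the thesis's system.

```haskell
-- A minimal sketch (invented mini-language): an "animator" derived from
-- a small-step operational semantics. Each transition is one animation
-- step; the view is simply the printed intermediate term.
data Expr = Lit Int | Add Expr Expr deriving Show

-- One small-step transition, or Nothing if the expression is a value.
step :: Expr -> Maybe Expr
step (Lit _)               = Nothing
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b)             = case step a of
  Just a' -> Just (Add a' b)        -- reduce the left operand first
  Nothing -> fmap (Add a) (step b)  -- then the right

-- The animation: the sequence of views of the program in mid-evaluation.
animate :: Expr -> [Expr]
animate e = e : maybe [] animate (step e)

main :: IO ()
main = mapM_ print (animate (Add (Add (Lit 1) (Lit 2)) (Lit 3)))
```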
30

On hereditary Harrop formulae as a basis for logic programming

Harland, James January 1991
This thesis examines the use of first-order hereditary Harrop formulae, a generalisation of Horn clauses due to Miller, as a foundation for logic programming. As this framework is constructive, it will sometimes dictate an approach which differs slightly from the traditional (classical) one. We discuss the foundational problems involved in adding negation to the framework of first-order hereditary Harrop formulae, including the role of the Negation as Failure (NAF) rule and the Closed World Assumption (CWA) in a constructive setting, and introduce the notion of a completely defined predicate. This notion may be used to define a notion of NAF for a more general class of goals than literals. We also discuss the possibilities for forms of negation other than NAF, and explore the relationships between NAF and more explicit forms. Clark's completion of a program is often used in this context, and we show how a more explicit version of the completion may be given in hereditary Harrop formulae. We may think of the completion as specifying a theory in which an atom A fails if A …, and hence as an explicit axiomatisation of failure, which in our case is more computationally meaningful than Clark's completion. The problem of finding answer substitutions for existentially quantified negated goals requires more powerful techniques than unification alone, and so we give an algorithm which is suitable for this purpose, and show how it may be incorporated into the goal reduction process. A constructive framework necessitates a different approach to model theory, and we give a Kripke-like model for the extended class of programs for which negation is implemented by the Negation as Failure rule. This is based on the model theory developed by Miller for hereditary Harrop formulae. No restriction on the class of programs is used, which requires some departures from the usual T^ω process, but the spirit of the construction remains the same.
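The Negation as Failure rule itself can be illustrated in a tiny propositional setting: a negated goal succeeds exactly when the search for the atom fails finitely. First-order hereditary Harrop programs are far richer; this Haskell sketch (invented encoding, assuming a finite program with no cyclic definitions) only shows the NAF rule.

```haskell
-- A minimal propositional sketch (invented encoding) of Negation as
-- Failure; it does not capture the first-order hereditary Harrop case.
data Goal = Atom String | Not Goal

-- A program maps each atom to the bodies of its defining clauses.
type Program = [(String, [[Goal]])]

solve :: Program -> Goal -> Bool
solve prog (Not g)  = not (solve prog g)        -- negation as failure
solve prog (Atom a) =
  case lookup a prog of
    Nothing     -> False                        -- no clauses: a fails
    Just bodies -> any (all (solve prog)) bodies

-- p holds; q holds if p holds and r fails; r has no clauses.
example :: Program
example = [("p", [[]]), ("q", [[Atom "p", Not (Atom "r")]])]

main :: IO ()
main = print (solve example (Atom "q"))  -- True
```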
