241. On the relationship between hypersequent calculi and labelled sequent calculi for intermediate logics with geometric Kripke semantics. Rothenberg, Robert. January 2010.
In this thesis we examine the relationship between hypersequent and some types of labelled sequent calculi for a subset of intermediate logics—logics between intuitionistic (Int) and classical logics—that have geometric Kripke semantics, which we call Int∗/Geo. We introduce a novel calculus for a fragment of first-order classical logic, which we call partially-shielded formulae (or PSF for short), that is adequate for expressing the semantic validity of formulae in Int∗/Geo, and apply techniques from correspondence theory to provide translations of hypersequents, simply labelled sequents and relational sequents (simply labelled sequents with relational formulae) into PSF. Using these translations, we show that hypersequents and simply labelled sequents for calculi in Int∗/Geo share the same models. We also use these translations to justify various techniques that we introduce for translating simply labelled sequents into relational sequents and vice versa. In particular, we introduce a technique called "transitive unfolding" for translating relational sequents into simply labelled sequents (and by extension, hypersequents) which preserves linear models in Int∗/Geo. We introduce syntactic translations between hypersequent calculi and simply labelled sequent calculi. We apply these translations to a novel hypersequent framework HG3ipm∗ for some logics in Int∗/Geo to obtain a corresponding simply labelled sequent framework LG3ipm∗, and to an existing simply labelled calculus for Int from the literature to obtain a novel hypersequent calculus for Int. We introduce methods for translating a simply labelled sequent calculus into a corresponding relational calculus, and apply these methods to LG3ipm∗ to obtain a novel relational framework RG3ipm∗ that bears similarities to existing calculi from the literature.
We use transitive unfolding to translate proofs in RG3ipm∗ into proofs in LG3ipm∗ and HG3ipm∗ with the communication rule, which corresponds to the semantic restriction to linear models.
242. Load balancing of irregular parallel applications on heterogeneous computing environments. Janjic, Vladimir. January 2012.
Large-scale heterogeneous distributed computing environments (such as Computational Grids and Clouds) offer the promise of access to a vast amount of computing resources at a relatively low cost. In order to ease application development and deployment on such complex environments, high-level parallel programming languages exist that need to be supported by sophisticated runtime systems. One of the main problems that these runtime systems need to address is dynamic load balancing: ensuring that no resources in the environment are underutilised or overloaded with work. This thesis deals with the problem of obtaining good speedups for irregular applications on heterogeneous distributed computing environments. It focuses on work-stealing techniques that can be used for load balancing during the execution of irregular applications. It specifically addresses two problems that arise during work-stealing: where thieves should look for work during the application execution, and how victims should respond to steal attempts. In particular, we describe and implement a new Feudal Stealing algorithm, together with new granularity-driven task selection policies, in the SCALES simulator, a work-stealing simulator developed for this thesis. In addition, we present a comprehensive evaluation of the Feudal Stealing algorithm and the granularity-driven task selection policies using simulations of a large class of regular and irregular parallel applications on a wide range of computing environments. We show how the Feudal Stealing algorithm and the granularity-driven task selection policies bring significant improvements in the speedups of irregular applications, compared to state-of-the-art work-stealing algorithms.
Furthermore, in addition to the implementation in SCALES, we present an implementation of the task selection policies in the Grid-GUM runtime system [AZ06] for Glasgow Parallel Haskell (GpH) [THLPJ98], together with an evaluation of this implementation on a large set of synthetic applications.
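Neither the Feudal Stealing algorithm nor the task selection policies are detailed in this abstract. As a rough illustration of the baseline such work improves upon, the following is a toy, single-threaded sketch of random-victim work-stealing; the simulation model, function name and task representation are assumptions made for illustration, not the thesis's design:

```python
import random
from collections import deque

def run_work_stealing(tasks_per_worker, n_workers, seed=0):
    """Toy single-threaded simulation of random work-stealing.

    Each worker owns a deque of tasks. A worker with work pops its own
    newest task; an idle worker picks a random victim and steals that
    victim's oldest task. Returns the number of steal attempts made
    before all work is done. (Illustrative only; real schedulers run
    workers concurrently.)
    """
    rng = random.Random(seed)
    deques = [deque(ts) for ts in tasks_per_worker]
    steal_attempts = 0
    remaining = sum(len(d) for d in deques)
    while remaining:
        for w in range(n_workers):
            if deques[w]:
                deques[w].pop()              # run own newest task
                remaining -= 1
            else:
                steal_attempts += 1
                victim = rng.randrange(n_workers)
                if victim != w and deques[victim]:
                    deques[victim].popleft()  # steal victim's oldest task
                    remaining -= 1
    return steal_attempts
```

Stealing the oldest task is the usual heuristic because it tends to be the largest; granularity-driven policies, as studied in the thesis, would instead select tasks by estimated size.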
243. An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design. Chen, Tin Kai. January 2009.
Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and indeed no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to transfer a specific body movement, based in particular on arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) on a real-time basis. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since the cursor movement is the machine output of the gesture interfaces that need to be designed, a new accuracy measure based on the calculation of the cursor movement distance, together with an associated model, was proposed in order to validate continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect the human performance data and the cursor movement distance, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, i.e. the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, namely the P5 Glove, was studied and the causes and effects of its discrete cursor movement on usability were investigated.
According to the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analysed and discussed. The research has contributed new knowledge through the design improvement of tilt-based gesture interfaces and the improvement of discrete cursor movement by elimination of manual error compensation. This research reveals a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts’ Law. In such situations, the actual working area and joint ranges are extensive and appreciably different from those originally planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
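The movement-time models "expanded from Fitts’ Law" mentioned above build on the Shannon formulation MT = a + b · log2(D/W + 1). A minimal sketch, assuming that formulation, of fitting the coefficients and computing a (plain, not adjusted) R² from pointing data; function names and parameters are illustrative, not the thesis's measurement platform:

```python
import math

def fitts_mt(distance, width, a, b):
    """Movement time under the Shannon formulation of Fitts' law:
    MT = a + b * log2(D/W + 1), where log2(D/W + 1) is the index of
    difficulty (ID) in bits."""
    return a + b * math.log2(distance / width + 1)

def fit_fitts(samples):
    """Least-squares fit of (a, b) from (distance, width, MT) samples,
    plus the coefficient of determination R^2 of the fit."""
    xs = [math.log2(d / w + 1) for d, w, _ in samples]
    ys = [mt for _, _, mt in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                # slope: seconds per bit of difficulty
    a = my - b * mx              # intercept: reaction/startup time
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot
```

A low R² for a given input device, as discussed above for gesture interfaces, indicates that movement times are poorly predicted by the index of difficulty alone.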
244. Mathematical approach to channel codes with a diagonal matrix structure. Mitchell, David G. M. January 2009.
Digital communications have now become a fundamental part of modern society. In communications, channel coding is an effective way to reduce the information rate to channel capacity so that information can be transmitted reliably through the channel. This thesis is devoted to studying the mathematical theory and analysis of channel codes that possess a useful diagonal structure in the parity-check and generator matrices. The first aspect of these codes that is studied is the ability to describe the parity-check matrix of a code with sliding diagonal structure using polynomials. Using this framework, an efficient new method is proposed to obtain a generator matrix G from certain types of parity-check matrices with a so-called defective cyclic block structure. By the nature of this method, G can also be completely described by a polynomial, which leads to efficient encoder design using shift registers. In addition, there is no need for the matrices to be in systematic form, thus avoiding the need for Gaussian elimination. Following this work, we proceed to explore some of the properties of diagonally structured low-density parity-check (LDPC) convolutional codes. LDPC convolutional codes have been shown to be capable of achieving the same capacity-approaching performance as LDPC block codes with iterative message-passing decoding. The first crucial property studied is the minimum free distance of LDPC convolutional code ensembles, an important parameter contributing to the error-correcting capability of the code. Here, asymptotic methods are used to form lower bounds on the ratio of the free distance to constraint length for several ensembles of asymptotically good, protograph-based LDPC convolutional codes. Further, it is shown that this ratio of free distance to constraint length for such LDPC convolutional codes exceeds the ratio of minimum distance to block length for corresponding LDPC block codes.
Another interesting property of these codes is the way in which their structure affects performance in the infamous error floor (which occurs at high signal-to-noise ratio) of the bit error rate curve. It has been suggested that “near-codewords” may be a significant factor affecting decoding failures of LDPC codes over an additive white Gaussian noise (AWGN) channel. A near-codeword is a sequence that satisfies almost all of the check equations. These near-codewords can be associated with so-called ‘trapping sets’ that exist in the Tanner graph of a code. In the final major contribution of the thesis, trapping sets of protograph-based LDPC convolutional codes are analysed. Here, asymptotic methods are used to calculate a lower bound on the trapping set growth rates for several ensembles of asymptotically good protograph-based LDPC convolutional codes. This value can be used to predict where the error floor will occur for these codes under iterative message-passing decoding.
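The near-codeword definition above can be made concrete in a few lines: count the parity checks a binary vector violates. This sketch uses the small (7,4) Hamming code's parity-check matrix as a stand-in example; the codes analysed in the thesis are far larger protograph-based LDPC convolutional codes:

```python
def syndrome_weight(H, x):
    """Hamming weight of the syndrome s = H·x (mod 2): the number of
    parity-check equations that binary vector x violates."""
    return sum(sum(hij * xj for hij, xj in zip(row, x)) % 2 for row in H)

def is_near_codeword(H, x, v_max):
    """x is a near-codeword if it satisfies almost all checks:
    syndrome weight between 1 and v_max (weight 0 makes x a codeword)."""
    return 0 < syndrome_weight(H, x) <= v_max

# Parity-check matrix of the (7,4) Hamming code; its columns are the
# seven nonzero binary triples.
H_HAMMING = [[1, 0, 1, 0, 1, 0, 1],
             [0, 1, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
```

A low-weight vector with a low-weight syndrome, such as a single-bit pattern violating one check, is exactly the kind of structure that trapping-set analysis counts in the Tanner graph.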
245. Natural selection, adaptive evolution and diversity in computational ecosystems. Pichler, Peter-Paul. January 2009.
The central goal of this thesis is to provide additional criteria towards implementing open-ended evolution in an artificial system. Methods inspired by biological evolution are frequently applied to generate autonomous agents too complex to design by hand. Despite substantial progress in the area of evolutionary computation, additional efforts are needed to identify a coherent set of requirements for a system capable of exhibiting open-ended evolutionary dynamics. The thesis provides an extensive discussion of existing models and of the major considerations for designing a computational model of evolution by natural selection. Thus, the work in this thesis constitutes a further step towards determining the requirements for such a system and introduces a concrete implementation of an artificial evolution system to evaluate the developed suggestions. The proposed system improves upon existing models with respect to easy interpretability of agent behaviour, high structural freedom, and a low-level sensor and effector model to allow numerous long-term evolutionary gradients. In a series of experiments, the evolutionary dynamics of the system are examined against the set objectives and, where appropriate, compared with existing systems. Typical agent behaviours are introduced to convey a general overview of the system dynamics. These behaviours are related to properties of the respective agent populations and their evolved morphologies. It is shown that an intuitive classification of observed behaviours coincides with a more formal classification based on morphology. The evolutionary dynamics of the system are evaluated and shown to be unbounded according to the classification provided by Bedau and Packard’s measures of evolutionary activity. Further, it is analysed how observed behavioural complexity relates to the complexity of the agent-side mechanisms subserving these behaviours. 
It is shown that for the concrete definition of complexity applied, the average complexity continually increases for extended periods of evolutionary time. In combination, these two findings show how the observed behaviours are the result of an ongoing and lasting adaptive evolutionary process as opposed to being artifacts of the seeding process. Finally, the effect of variation in the system on the diversity of evolved behaviour is investigated. It is shown that coupling individual survival and reproductive success can restrict the available evolutionary trajectories in more than the trivial sense of removing another dimension, and conversely, decoupling individual survival from reproductive success can increase the number of evolutionary trajectories. The effect of different reproductive mechanisms is contrasted with that of variation in environmental conditions. The diversity of evolved strategies turns out to be sensitive to the reproductive mechanism while being remarkably robust to the variation of environmental conditions. These findings emphasize the importance of being explicit about the abstractions and assumptions underlying an artificial evolution system, particularly if the system is intended to model aspects of biological evolution.
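Bedau and Packard's evolutionary activity measures, cited above to classify the dynamics as unbounded, track how long components persist in the population. A heavily simplified sketch of the core bookkeeping (the actual statistics also normalise against a neutral "shadow" model, omitted here):

```python
def evolutionary_activity(populations):
    """Simplified Bedau-Packard-style activity bookkeeping.

    `populations` is a list of generations, each a set of component
    identifiers (e.g. genotypes) present in that generation. Each
    component's activity counter increments for every generation it is
    present; the returned list gives, per generation, the summed
    counters of the components alive in it (total cumulative activity).
    """
    counters = {}
    totals = []
    for generation in populations:
        for comp in generation:
            counters[comp] = counters.get(comp, 0) + 1
        totals.append(sum(counters[c] for c in generation))
    return totals
```

Unbounded growth of such totals over evolutionary time, relative to the neutral shadow, is the signature used to argue the dynamics are open-ended rather than saturating.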
246. A model-independent theory of computational complexity: from patience to precision and beyond. Blakey, Edward William. January 2010.
The field of computational complexity theory (which chiefly aims to quantify the difficulty encountered when performing calculations) is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, 'time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc. In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween. Our notions of resource, overall complexity, normality, etc.
form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
247. Handling cultural factors in human-computer interaction. Bourges-Waldegg, Paula. January 1998.
The main objective of the research described in this thesis was to investigate and understand the origins of culturally-determined usability problems in the context of Human-Computer Interaction (HCI), in order to develop a method for treating this issue when designing systems intended to be shared by culturally-heterogeneous user groups, such as Computer Supported Co-operative Work (CSCW) systems and the Internet. The resulting approach supports HCI designers by providing an alternative to internationalisation and localisation guidelines, which are inappropriate for tackling culturally-determined usability problems in the context of shared systems. The research also sought to apply and test the developed approach in order to assess its efficacy and to modify or improve it accordingly.
248. Probabilistic topic models for sentiment analysis on the Web. Chenghua, Lin. January 2011.
Sentiment analysis aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text, and has attracted rapidly growing interest in natural language processing in recent years. Probabilistic topic models, on the other hand, are capable of discovering hidden thematic structure in large archives of documents, and have been an active research area in the field of information retrieval. The work in this thesis focuses on developing topic models for automatic sentiment analysis of web data by combining ideas from both research domains. One notable issue with most previous work in sentiment analysis is that the trained classifier is domain-dependent, and the labelled corpora required for training can be difficult to acquire in real-world applications. Another issue is that the dependencies between sentiment/subjectivity and topics are not taken into consideration. The main contribution of this thesis is therefore the introduction of three probabilistic topic models, which address the above concerns by modelling sentiment/subjectivity and topic simultaneously. The first model is the joint sentiment-topic (JST) model, based on latent Dirichlet allocation (LDA), which detects sentiment and topic simultaneously from text. Unlike supervised approaches to sentiment classification, which often fail to produce satisfactory performance when applied to new domains, the weakly-supervised nature of JST makes it highly portable to other domains, where the only supervision required is a domain-independent sentiment lexicon. Apart from document-level sentiment classification results, JST can also extract sentiment-bearing topics automatically, a distinct feature compared to existing sentiment analysis approaches. The second model is a dynamic version of JST called the dynamic joint sentiment-topic (dJST) model.
dJST respects the ordering of documents, and allows the analysis of topic and sentiment evolution of document archives that are collected over a long time span. By accounting for the historical dependencies of documents from the past epochs in the generative process, dJST gives a richer posterior topical structure than JST, and can better respond to the permutations of topic prominence. We also derive online inference procedures based on a stochastic EM algorithm for efficiently updating the model parameters. The third model is called the subjectivity detection LDA (subjLDA) model for sentence-level subjectivity detection. Two sets of latent variables were introduced in subjLDA. One is the subjectivity label for each sentence; another is the sentiment label for each word token. By viewing the subjectivity detection problem as weakly-supervised generative model learning, subjLDA significantly outperforms the baseline and is comparable to the supervised approach which relies on much larger amounts of data for training. These models have been evaluated on real world datasets, demonstrating that joint sentiment topic modelling is indeed an important and useful research area with much to offer in the way of good results.
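The "domain-independent sentiment lexicon" that serves as the models' only supervision can be illustrated with a toy baseline. The tiny lexicon below is hypothetical (real systems use resources such as MPQA or SentiWordNet), and in JST the lexicon does not classify documents directly as done here; it instead informs word-level sentiment priors during model inference:

```python
# Hypothetical toy lexicon: word -> polarity. Purely illustrative.
SENTIMENT_LEXICON = {"good": +1, "great": +1, "excellent": +1,
                     "bad": -1, "poor": -1, "terrible": -1}

def lexicon_sentiment(tokens, lexicon=SENTIMENT_LEXICON):
    """Baseline document-level polarity: the sign of the summed word
    polarities, mapped to 'pos', 'neg' or 'neu'."""
    score = sum(lexicon.get(t.lower(), 0) for t in tokens)
    return "pos" if score > 0 else "neg" if score < 0 else "neu"
```

Because the lexicon carries no domain-specific vocabulary, the same prior knowledge transfers across domains, which is the portability property claimed for JST above.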
249. Gender differences in navigation dialogues with computer systems. Koulouri, Theodora. January 2013.
Gender is among the most influential of the factors underlying differences in spatial abilities, human communication, and interactions with and through computers. Past research has offered important insights into gender differences in navigation and language use. Yet, given the multidimensionality of these domains, many issues remain contentious while others remain unexplored. Moreover, having been derived from non-interactive, and often artificial, studies, the generalisability of this research to interactive contexts of use, particularly in the practical domain of Human-Computer Interaction (HCI), may be problematic. At the same time, little is known about how gender strategies, behaviours and preferences interact with the features of technology in various domains of HCI, including collaborative systems and systems with natural language interfaces. Targeting these knowledge gaps, the thesis aims to address the central question of how gender differences emerge and operate in spatial navigation dialogues with computer systems. To this end, an empirical study was undertaken in which mixed-gender and same-gender pairs communicated to complete an urban navigation task, with one of the participants being under the impression that he/she was interacting with a robot. Performance and dialogue data were collected using a custom system that supported synchronous navigation and communication between the user and the robot. Based on this empirical data, the thesis describes the key role of the interaction of gender in navigation performance and communication processes, which outweighed the effect of individual gender, moderating gender differences and reversing predicted patterns of performance and language use. This thesis has produced several contributions: theoretical, methodological and practical. From a theoretical perspective, it offers novel findings on gender differences in navigation and communication.
The methodological contribution concerns the successful application of dialogue as a naturalistic, and yet experimentally sound, research paradigm to study gender and spatial language. The practical contributions include concrete design guidelines for natural language systems and implications for the development of gender-neutral interfaces in specific domains of HCI.
250. Opening the Web for all: inclusive and secure design of an online authentication system. Gibson, Marcia. January 2012.
Effective use of the World Wide Web grants users increased power over people, time and space. However, its growing ubiquity also means these powers tend to become eroded in non-users. Growth of the Web as a marketplace and as a channel to deliver e-services results in an ever increasing volume of sensitive information being transacted and stored online. As a result, authentication systems are now used extensively on the Web. Unfortunately, the profusion of Web sites and the large numbers of associated passwords reduce their efficacy and put severe strain on users’ limited cognitive resources. Authentication systems themselves can therefore act as an additional source of exclusion, yet this step of authentication has, up until now, been largely overlooked when considering inclusive design. People may experience a variety of barriers to Internet access: Psychological, Material, Skills and Usage. Existing models of these barriers within the literature are discussed, and a unified model of exclusion is developed and used to identify a series of potential solutions to the various aspects of each barrier. These solutions are classified into four separate design goals: Enhanced Usability, Enhanced Accessibility, Reduced End-user Cost and Robust Security. A number of groups who are especially at risk of Web exclusion are also identified. The design goals are used to evaluate existing traditional and image-based passwords. The accessibility component is assessed in terms of twenty-two use scenarios, each consisting of a particular user group’s limiting characteristic and the strategies the group is known to use when accessing the Web. The accessibility analysis shows traditional passwords to be less accessible for several groups:
• Novice users, who experience reduced comparative learnability, reduced efficiency and increased errors.
• Mobile phone users, head wand users, eye gaze tracker users, those with reduced manual dexterity and/or tremors accessing principally via a mouse or keyboard, those with impaired ability to select and filter relevant sensory information, and low-literacy users accessing via normal or text-to-speech browsers. These groups experience reduced comparative efficiency and increased errors.
• Users with impaired ability to remember information or sequences, and illiterate users accessing via a text-to-speech or normal browser. These groups have the most significant issues with passwords, experiencing reduced comparative learnability, memorability and efficiency, and increased errors.
Image-based passwords are found to be more accessible for some of these groups, but are unusable by blind users and less usable by those with visual impairments. Just as Web users are not a uniform, homogeneous group, so too is there no homogeneous solution to creating usable security. Even so, there may be solutions that are usable and secure given the particular scenario within which they will be used. For this reason, it is important to supply a number of alternatives, because as one modality or model of interaction is locked out, another group becomes excluded. One such alternative, a novel scheme called “Musipass”, is trialled in lab-based and large-scale online user participation experiments. Musipass is found to offer superior long-term memorability to a traditional password, and users report enjoying the experience of authenticating with music. A security analysis shows Musipass to offer comparable or enhanced security relative to a traditional password against a number of well-known attacks.
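Security comparisons between password schemes often start from the size of the theoretical password space. A sketch with illustrative parameters only (these are assumptions for the example, not Musipass's actual configuration):

```python
import math

def password_entropy_bits(alphabet_size, length):
    """Entropy in bits of a uniformly random secret drawn as `length`
    independent choices from `alphabet_size` options:
    log2(alphabet_size ** length) = length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

# An 8-character password over the 94 printable ASCII symbols, versus a
# hypothetical music-based scheme choosing an ordered sequence of 4
# clips from a library of 100:
text_bits = password_entropy_bits(94, 8)    # ~52.4 bits
music_bits = password_entropy_bits(100, 4)  # ~26.6 bits
```

Raw space size is only one axis of such an analysis; resistance to observation, phishing and guessing attacks, as evaluated in the thesis, depends on how users actually choose and enter secrets.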