81 |
Temporal representations in human computer interaction: designing for the lived experience of time. Buzzo, D. January 2017.
Temporal Representations in Human Computer Interaction is a portfolio of peer-reviewed papers and media artworks representing several years of investigation, experimental making, and study into the perception and representation of time in digital media and Human Computer Interaction (HCI). The lived experience of time is fundamental to our understanding of the world, and the work investigates the complex and contradictory nature of time and the powerful effect temporal representations have on health, well-being and perception. The investigation, methodology and outputs are based in the traditions of interdisciplinary electronic arts, technology research and digital culture. Practice-led, research-through-design techniques are used as a means to explore alternative framings of temporal processes. I discuss the importance of temporal experience as a concern for contextual design, consider models and lived experiences of time, and examine how system-centric representations alter our perceptions of time. The papers and works (discussed in chapters 4, 5 and 6 and included in full in Appendix A) cover three broad areas: analysis and discussion of practice and language in HCI related to time; experimental designs for temporal objects and media artworks; and, through critical artefacts, media art and publications, a new perspective on temporality in design and representation, together with a framework for re-assessing how we include time in interaction design. The framework is designed to be used as a tool to support understanding of the temporality of users, or of situations of use, and to assist in translating and re-framing insights into practical outputs for the improved design of interactive systems. It contrasts system-centric with user-centric temporal representation, and contributes a new design methodology aimed at improving contextual design processes.
By increasing awareness of temporal context, and of the multi-dimensional nature of users' temporality, designers can better understand user context when creating truly user-centred interactive experiences.
|
82 |
Automatic monitoring of physical activity related affective states for chronic pain rehabilitation. Olugbade, Temitayo A. January 2018.
Chronic pain is a prevalent disorder that affects engagement in valued activities. This is a consequence of cognitive and affective barriers to physical functioning, particularly low self-efficacy and emotional distress (i.e. fear/anxiety and depressed mood). Although clinicians intervene to reduce these barriers, their support is limited to clinical settings and its effects do not easily transfer to everyday functioning, which is key to self-management for the person with pain. Analysis carried out in parallel with this thesis points to untapped opportunities for technology to support pain self-management or improved function in everyday activity settings. With this long-term goal for technology in mind, this thesis investigates the possibility of building systems that can automatically detect relevant psychological states from movement behaviour, making three main contributions. First, an extended annotation of an existing dataset of participants with and without chronic pain performing physical exercises is used to develop a new model of chronic disabling pain in which anxiety acts as mediator between pain and self-efficacy, emotional distress, and movement behaviour. Unlike previous models, which are largely theoretical and draw on broad measures of these variables, the proposed model uses event-specific data that better characterise the influence of pain and related states on engagement in physical activities. The model further shows that the relationship between these states and guarding during movement (the behaviour specified in the pain behaviour literature) is complex, and that behaviour descriptions of a lower level of granularity are needed for automatic classification of the states. The model also suggests that some of the states may be expressed via other types of movement behaviour.
Second, addressing this using the aforementioned dataset with the additional labels, and through an in-depth analysis of movement, this thesis provides an extended taxonomy of bodily cues for the automatic classification of pain, self-efficacy and emotional distress. In particular, the thesis offers an understanding of novel cues of these states and a deeper understanding of known cues of pain and emotional distress. Using machine learning algorithms, average F1 scores (mean across movement types) of 0.90, 0.87, and 0.86 were obtained for automatic detection of three levels of pain, three levels of self-efficacy, and two levels of emotional distress respectively, based on the bodily cues described, supporting the discriminative value of the proposed taxonomy. Third, building on this, a new dataset of both functional and exercise movements of people with chronic pain was acquired using low-cost wearable sensors designed for this thesis and informed by the previous studies. The modelling results, an average F1 score of 0.78 for two-level detection of both pain and self-efficacy, point to the possibility of automatic monitoring of these states in everyday functioning. With these contributions, the thesis provides the understanding and tools necessary to advance the area of pain-related affective computing, and groundbreaking insight that is critical to the understanding of chronic pain. Finally, the contributions lay the groundwork for physical rehabilitation technology to facilitate the everyday functioning of people with chronic pain.
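The F1 figures above are means across movement types. As a minimal sketch of how such a score is computed, the following evaluates a macro-averaged F1 per movement type and then averages across types; the movement names, labels and predictions are hypothetical illustrations, not data from the thesis:

```python
def f1_for_class(y_true, y_pred, positive):
    """F1 score for one class treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    return sum(f1_for_class(y_true, y_pred, c) for c in labels) / len(labels)

# Hypothetical predictions of three pain levels (0, 1, 2) for two
# movement types; names and numbers are for illustration only.
per_movement = {
    "sit-to-stand":  ([0, 1, 2, 1, 0], [0, 1, 2, 1, 1]),
    "reach-forward": ([2, 2, 0, 1, 0], [2, 2, 0, 1, 0]),
}
scores = {m: macro_f1(t, p, labels=[0, 1, 2]) for m, (t, p) in per_movement.items()}
mean_f1 = sum(scores.values()) / len(scores)  # mean across movement types
```

Averaging per movement type, rather than pooling all samples, prevents frequent movement types from dominating the reported score.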
|
83 |
Axiomatic domain theory in categories of partial maps. Fiore, Marcelo P. January 1994.
This thesis is an investigation into axiomatic categorical domain theory as needed for the denotational semantics of deterministic programming languages. To provide a direct semantic treatment of non-terminating computations, we make partiality the core of our theory. Thus, we focus on categories of partial maps. We study representability of partial maps and show its equivalence with classifiability. We observe that, once partiality is taken as primitive, a notion of approximation may be derived. In fact, two notions of approximation, contextual approximation and specialisation, based on testing and observing partial maps are considered and shown to coincide. Further we characterise when the approximation relation between partial maps is domain-theoretic in the (technical) sense that the category of partial maps Cpo-enriches with respect to it. Concerning the semantics of type constructors in categories of partial maps, we present a characterisation of colimits of diagrams of total maps; study order-enriched partial cartesian closure; and provide conditions to guarantee the existence of the limits needed to solve recursive type equations. Concerning the semantics of recursive types, we motivate the study of enriched algebraic compactness and make it the central concept when interpreting recursive types. We establish the fundamental property of algebraically compact categories, namely that recursive types on them admit canonical interpretations, and show that in algebraically compact categories recursive types reduce to inductive types. Special attention is paid to Cpo-algebraic compactness, leading to the identification of a 2-category of kinds with very strong closure properties. As an application of the theory developed, enriched categorical models of the metalanguage FPC (a type theory with sums, products, exponentials and recursive types) are defined and two abstract examples of models, including domain-theoretic models, are axiomatised. 
Further, FPC is considered as a programming language with a call-by-value operational semantics and a denotational semantics defined on top of a categorical model. Operational and denotational semantics are related via a computational soundness result. The interpretation of FPC expressions in domain-theoretic Poset-models is observed to be representation-independent. And, to culminate, a computational adequacy result for an axiomatisation of absolute non-trivial domain-theoretic models is proved.
|
84 |
Staying active despite pain: investigating feedback mechanisms to support physical activity in people with chronic musculoskeletal pain. Singh, A. January 2016.
Chronic (persistent) pain (CP) affects 1 in 10 adults; clinical resources are insufficient, and anxiety about activity restricts lives. Physical activity is important for improving function and quality of life in people with chronic pain, but psychological factors, such as fear of increased pain and damage due to activity and lack of confidence or support, make it difficult to build and maintain physical activity towards long-term goals. There is insufficient research to guide the design of interactive technology to support people with CP in self-managing physical activity. This thesis aims to bridge this gap through five contributions. First, a detailed analysis of a series of qualitative studies with people with CP and physiotherapists identifies factors to be incorporated into technology to promote physical activity despite pain. Second, we rethink the role of technology in improving uptake of physical activity in people with CP by proposing a novel sonification framework (Go-with-the-flow) that addresses psychological and physical needs raised by our studies; through an iterative approach, we designed a wearable device to implement and evaluate the framework. In controlled studies conducted to evaluate the sonification strategies, people with CP reported increased performance, motivation, awareness of movement, and relaxation with sound feedback. A focus group, and a survey of CP patients conducted at the end of a hospital pain management session, provided an in-depth understanding of how different aspects of the framework and device facilitate self-directed rehabilitation. Third, we investigate the role of sensing technology and real-time feedback in supporting functional activity, using the Go-with-the-flow framework and wearable device; we conducted evaluations including contextual interviews, diary studies and a 7-14 day study of self-directed home-based use of the device by people with CP.
Fourth, building on the understanding from all our studies and literature from other conditions where physical rehabilitation is critical, we propose a framework for designing technology for physical rehabilitation (RaFT). Fifth, we reflect on our studies with people with CP and physiotherapists and provide practical insights for HCI research in sensitive settings.
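As an illustration of how sound feedback might track movement, here is a small sketch in which progress through a stretch drives pitch across one octave; the mapping, parameter names and values are hypothetical assumptions for illustration, not the calibration of the published Go-with-the-flow framework:

```python
def sonify(angle_deg, target_deg, base_hz=220.0, octave_span=1.0):
    """Map progress through a stretch (0..target degrees) to a pitch.

    Hypothetical mapping: progress sweeps the pitch exponentially
    across `octave_span` octaves above `base_hz`, and is clamped so
    that overshooting the target does not raise the pitch further.
    """
    progress = max(0.0, min(angle_deg / target_deg, 1.0))
    return base_hz * 2.0 ** (octave_span * progress)
```

In a real device the angle would come from a wearable inertial sensor and the tone would be resynthesised continuously; here the function only computes the target frequency for a given posture.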
|
85 |
Communicating in a multi-role, multi-device, multi-channel world: how knowledge workers manage work-home boundaries. Cecchinato, Marta E. January 2018.
Technology keeps us connected through multiple devices, on several communication channels, and with our many daily roles. Being able to better manage one's availability and thus have more control over work-home boundaries can potentially reduce interferences and ultimately stress. However, little is known about the practical implications of communication technologies and their role in boundary and availability management. Taking a bottom-up approach, we conducted four exploratory qualitative studies to understand how current communication technologies support and challenge work-home boundaries for knowledge workers. First, we compared email practices across accounts and devices, finding differences based on professional and personal preferences. Second, with wearables such as smartwatches becoming more popular, findings from our autoethnography and interview study show how device ecologies can be used to moderate notifications and one's sense of availability. Third, moving beyond just email to include multiple communication channels, our diary study and focus group showed how awareness and availability are managed and interpreted differently between senders and receivers. Together, these studies portray how current communication technologies challenge boundary management and how users rely on strategies – that we define as microboundaries – to mitigate boundary cross-overs, boundary interruptions, and expectations of availability. Finally, to understand the extent to which microboundaries can be useful boundary management strategies, we took a multiple-case study approach to evaluate how they are used over time and found that, although context-dependent, microboundaries help increase participants' boundary control and reduce stress. This thesis' primary contribution is a taxonomy of microboundary strategies that deepens our current understanding of boundary management in the digital age.
When users feel in control, they experience fewer unwanted boundary cross-overs and ultimately feel less stressed. This work also makes a secondary contribution to individual and organisational practice. Finally, we draw a set of implications for the design of interactions and cross-device experiences.
|
86 |
The structure of call-by-value. Führmann, Carsten. January 2000.
Understanding procedure calls is crucial in computer science and everyday programming. Among the most common strategies for passing procedure arguments ('evaluation strategies') are 'call-by-name', 'call-by-need', and 'call-by-value', the last of which is the most commonly used. While reasoning about procedure calls is simple for call-by-name, problems arise for call-by-need and call-by-value, because it matters how often and in which order the arguments of a procedure are evaluated. We shall classify these problems and see that all of them occur for call-by-value, some occur for call-by-need, and none occur for call-by-name. In that sense, call-by-value is the 'greatest common denominator' of the three evaluation strategies. Reasoning about call-by-value programs has been tackled by Eugenio Moggi's 'computational lambda-calculus', which is based on a distinction between 'values' and arbitrary expressions. However, the computational lambda-calculus deals only implicitly with the evaluation order and the number of evaluations of procedure arguments. Therefore, certain program equivalences that we should be able to spot immediately require long proofs. We shall remedy this by introducing a new calculus - the 'let-calculus' - that deals explicitly with evaluation order and the number of evaluations. For dealing with the number of evaluations, the let-calculus has mechanisms known from linear, affine, and relevant logic. For dealing with evaluation order, it has mechanisms which seem to be completely new. We shall also introduce a new kind of denotational semantics for call-by-value programming languages. The key idea is to consider how categories with finite products are commonly used to model call-by-name languages, and remove the axioms which break for call-by-value. The resulting models we shall call 'precartesian categories'. These relatively simple structures have remarkable mathematical properties, which will inspire the design of the let-calculus.
Precartesian categories will provide a semantics of both the let-calculus and the computational lambda-calculus. This semantics not only validates the same program equivalences as Moggi's monad-based semantics of the computational lambda-calculus; it is also 'direct', by contrast to Moggi's semantics, which implicitly performs a language transform. Our direct semantics has practical benefits: it clarifies issues related to the evaluation order and the number of evaluations of procedure arguments, and it is also very easy to remember. The thesis is rounded off by three applications of the let-calculus and precartesian categories: first, construing well-established models of partiality (i.e. categories of generalised partial functions) as precartesian categories, and specialising the let-calculus accordingly; second, adding global state to a given computational system and construing the resulting system as a precartesian category; third, analysing an implementation technique called 'continuation-style transform' by construing the source language of such a transform as a precartesian category.
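The claim that it matters how often and in which order arguments are evaluated can be made concrete with thunks. The sketch below (an illustration in Python, not a construction from the thesis) shows the three strategies producing the same value while evaluating the argument a different number of times:

```python
evaluations = []

def noisy_arg():
    """An argument expression with an observable side effect."""
    evaluations.append("eval")
    return 21

# Call-by-value: the argument is evaluated exactly once, before the call.
def double_cbv(x):
    return x + x

# Call-by-name: the argument is passed unevaluated and re-evaluated per use.
def double_cbn(arg_thunk):
    return arg_thunk() + arg_thunk()

# Call-by-need: like call-by-name, but the first result is cached.
def memoised(thunk):
    cache = []
    def forced():
        if not cache:
            cache.append(thunk())
        return cache[0]
    return forced

evaluations.clear()
assert double_cbv(noisy_arg()) == 42
cbv_evals = len(evaluations)              # argument evaluated once

evaluations.clear()
assert double_cbn(noisy_arg) == 42
cbn_evals = len(evaluations)              # argument evaluated twice

evaluations.clear()
assert double_cbn(memoised(noisy_arg)) == 42
need_evals = len(evaluations)             # argument evaluated once, then cached
```

Once the argument has an effect, inlining `double(e)` to `e + e` changes how often the effect fires under call-by-value; this is the kind of equivalence problem that the let-calculus is designed to make explicit.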
|
87 |
A theory of program refinement. Denney, Ewen W. K. C. January 1999.
We give a canonical program refinement calculus based on the lambda calculus and classical first-order predicate logic, and study its proof theory and semantics. The intention is to construct a metalanguage for refinement in which basic principles of program development can be studied. The idea is that it should be possible to induce a refinement calculus in a generic manner from a programming language and a program logic. For concreteness, we adopt the simply-typed lambda calculus augmented with primitive recursion as a paradigmatic typed functional programming language, and use classical first-order logic as a simple program logic. A key feature is the construction of the refinement calculus in a modular fashion, as the combination of two orthogonal extensions to the underlying programming language (in this case, the simply-typed lambda calculus). The crucial observation is that a refinement calculus is given by extending a programming language to allow indeterminate expressions (or 'stubs') involving the construction 'some program x such that P'. Factoring this into 'some x ...' and '... such that P', we first study extensions to the lambda calculus providing separate analyses of what we might call 'true' stubs, and structured specifications. The questions we are concerned with in these calculi are how stubs interact with the programming language, and what a suitable notion of structured specification for program development is. The full refinement calculus is then constructed in a natural way as the combination of these two subcalculi. The claim that the subcalculi are orthogonal extensions to the lambda calculus is justified by a result that a refinement can actually be factored into simpler judgements in the subcalculi, that is, into logical reasoning and simple decomposition. The semantics for the calculi are given using Henkin models with additional structure.
Both simply-typed lambda calculus and first-order logic are interpreted using Henkin models themselves. The two subcalculi require some extra structure and the full refinement calculus is modelled by Henkin models with a combination of these extra requirements. There are soundness and completeness results for each calculus, and by virtue of there being certain embeddings of models we can infer that the refinement calculus is a conservative extension of both of the subcalculi which, in turn, are conservative extensions of the lambda calculus.
|
88 |
A semantic analysis of control. Laird, James David. January 1999.
This thesis examines the use of denotational semantics to reason about control flow in sequential, basically functional languages. It extends recent work in game semantics, in which programs are interpreted as strategies for computation by interaction with an environment. Abramsky has suggested that an intensional hierarchy of computational features such as state, and their fully abstract models, can be captured as violations of the constraints on strategies in the basic functional model. Non-local control flow is shown to fit into this framework as the violation of strong and weak `bracketing' conditions, related to linear behaviour. The language muPCF (Parigot's lambda-mu calculus with constants and recursion) is adopted as a simple basis for higher-type, sequential computation with access to the flow of control. A simple operational semantics for both call-by-name and call-by-value evaluation is described. It is shown that dropping the bracketing condition on games models of PCF yields fully abstract models of muPCF. The games models of muPCF are instances of a general construction based on a continuations monad on Fam(C), where C is a rational cartesian closed category with infinite products. Computational adequacy, definability and full abstraction can then be captured by simple axioms on C. The fully abstract and universal models of muPCF are shown to have an effective presentation in the category of Berry-Curien sequential algorithms. There is further analysis of observational equivalence, in the form of a context lemma, and a characterization of the unique functor from the (initial) games model, which is an isomorphism on its (fully abstract) quotient. This establishes decidability of observational equivalence for finitary muPCF, contrasting with the undecidability of the analogous relation in pure PCF.
|
89 |
Design and integrity of deterministic system architectures. Smith, Richard Bartlett. January 2007.
Architectures, represented by system construction 'building block' components and their interrelationships, provide the structural form. This thesis addresses processes, procedures and methods that support system design synthesis, and specifically the determination of the integrity of candidate architectural structures. Particular emphasis is given to the structural representation of system architectures, their consistency and functional quantification. It is a design imperative that a hierarchically decomposed structure maintains compatibility and consistency between the functional and realisation solutions. Complex systems are normally simplified by hierarchical decomposition, so that lower-level components are precisely defined and simpler than higher-level components. To enable such systems to be reconstructed from their components, the hierarchical construction must provide vertical intra-relationship consistency, horizontal interrelationship consistency, and inter-component functional consistency. Firstly, a modified process design model is proposed that incorporates the generic structural representation of system architectures. Secondly, a system architecture design knowledge domain is proposed that enables viewpoint evaluations to be aggregated into a coherent set of domains that are both necessary and sufficient to determine the integrity of system architectures. Thirdly, four methods of structural analysis are proposed to assure the integrity of the architecture. The first enables the structural compatibility between the 'building blocks' that provide the emergent functional properties and the implementation solution properties to be determined. The second enables the compatibility of the functional causality structure and the implementation causality structure to be determined. The third method provides a graphical representation of architectural structures.
The fourth method uses the graphical form of structural representation to provide a technique for quantitative estimation of the performance of emergent properties in large-scale or complex architectural structures. These methods have been combined into a procedure of formal design: a design process that, if rigorously executed, meets the requirements for reconstructability.
|
90 |
Automatic goal distribution strategies for the execution of committed choice logic languages on distributed memory parallel computers. Scott, R. B. January 1994.
There has been much research interest in efficient implementations of the Committed Choice Non-Deterministic (CCND) logic languages on parallel computers. To take full advantage of the speed gains of parallel computers, methods need to be found to automatically distribute goals over the machine processors, ideally with as little involvement from the user as possible. In this thesis we explore some automatic goal distribution strategies for the execution of the CCND languages on commercially available distributed memory parallel computers. There are two facets to the goal distribution strategies we have chosen to explore. Demand-driven: an idle processor requests work from other processors. We describe two strategies in this class: one in which an idle processor asks only neighbouring processors for spare work, the 'nearest-neighbour' strategy; and one where an idle processor may ask any other processor in the machine for spare work, the 'all-processors' strategy. Weights: using a program analysis technique devised by Tick, weights are attached to goals; the weights can be used to order the goals so that they are executed and distributed in weighted order, possibly increasing performance. We describe a framework in which to implement and analyse goal distribution strategies, and then go on to describe experiments with demand-driven strategies, both with and without weights. The experiments were made using two of our own implementations of Flat Guarded Horn Clauses, an interpreter and a WAM-based system, executing on a MEIKO T800 Transputer Array configured in a 2-D mesh topology.
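The two demand-driven strategies differ only in which processors an idle processor may ask. The round of work requests below is a hypothetical, much-simplified sketch (no Transputer communication, no Tick-style weights; the take-half heuristic is an assumption for illustration, not the thesis's scheduler):

```python
def distribution_round(queues, request_sets):
    """One round of demand-driven goal distribution.

    queues:       processor id -> list of goals (spare work as a stack)
    request_sets: processor id -> donors to ask, in order; mesh
                  neighbours for the nearest-neighbour strategy,
                  every other processor for all-processors.
    """
    for proc, donors in request_sets.items():
        if queues[proc]:
            continue                      # busy processors issue no request
        for donor in donors:
            spare = len(queues[donor]) // 2
            if spare:
                for _ in range(spare):    # take half the donor's goals
                    queues[proc].append(queues[donor].pop())
                break

# A 2x2 mesh: processor 0 holds all 8 goals, the rest start idle.
queues = {0: [f"g{i}" for i in range(8)], 1: [], 2: [], 3: []}
nearest_neighbour = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
distribution_round(queues, nearest_neighbour)
sizes = {p: len(q) for p, q in queues.items()}
```

After a single round the load spreads evenly across the mesh even though each idle processor only asked its neighbours; with an all-processors request set the same code models the second strategy.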
|