191 | Why does soft matter? : exploring the design space of soft robotic materials and programmable machines | Winters, Amy | January 2017
This practice-led research examines how the emerging role of the ‘material designer’ can enrich the design process in Human Computer Interaction. It advocates embodiment as a design methodology by employing tacit knowledge; focusing on a subjective, affective and visceral engagement with computational materials. This theoretical premise is explored by drawing on the fields of soft robotics, as well as transitive and programmable materials. With the advancement and democratisation of physical computing and digital fabrication, it is now possible for designers to process, or even invent and composite new programmable materials, merging both their physical and digital capabilities. This study questions how the notion of soft can develop a distinct space for the design of novel user interfaces. This premise is applied through a phenomenological understanding of technology development—as opposed to generating data which is solely reliant on observable and measurable evidence. Bio-engineered technologies such as electroactive polymer, pneumatic and hydraulic actuator systems are deployed to explore a new type of responsive, sensual and organic materiality. Here, traditional medical diagnostic applications such as microfluidics are transferred into the experimental contexts of textiles and wearable technology. Therefore, by thinking through physical prototyping, a bodily engagement with materials and the interpretation of the elements of water, air and steam; a designer can create a fertile ground for a polyvalent imagination. Together, this methodology is used as a qualitative system for collecting and evaluating data on the significance of design-led thinking in soft robotic materials. This research concludes that there are insights to be gained from the creative practice and exploratory methods of material-led thinking in HCI that can contribute to the commercial research and development fields of wearable technology. Outputs include a prototype box of ‘Invention Tools’ for textile designers and the identification and creation of the role of embodied making in relation to the imagination. Further, soft composite hybrids, incorporating elastomers, have potential applications in colour, texture and shape changing surfaces. Thus, this thesis argues that it is within the creative soft sciences that the next advancements in soft robotics may emerge.

192 | Modelling uncertainty in touch interaction | Weir, Daryl | January 2014
Touch interaction is an increasingly ubiquitous input modality on modern devices. It appears on devices including phones, tablets, smartwatches and even some recent laptops. Despite its popularity, touch as an input technology suffers from a high level of measurement uncertainty. This stems from issues such as the ‘fat finger problem’, where the soft pad of the finger creates an ambiguous contact region with the screen that must be approximated by a single touch point. In addition to these physical uncertainties, there are issues of uncertainty of intent when the user is unsure of the goal of a touch. Perhaps the most common example is when typing a word: the user may be unsure of the spelling, leading to touches on the wrong keys. The uncertainty of touch leads to an offset between the user’s intended target and the touch position recorded by the device. While numerous models have been proposed to model and correct for these offsets, existing techniques in general have assumed that the offset is a deterministic function of the input. We observe that this is not the case — touch also exhibits a random component. We propose in this dissertation that this property makes touch an excellent target for analysis using probabilistic techniques from machine learning. These techniques allow us to quantify the uncertainty expressed by a given touch, and the core assertion of our work is that this allows useful improvements to touch interaction to be obtained. We show this through a number of studies. In Chapter 4, we apply Gaussian Process regression to the touch offset problem, producing models which allow very accurate selection of small targets. In the process, we observe that offsets are both highly non-linear and highly user-specific. In Chapter 5, we make use of the predictive uncertainty of the GP model when applied to a soft keyboard — this allows us to obtain key press probabilities which we combine with a language model to perform autocorrection. In Chapter 6, we introduce an extension to this framework in which users are given direct control over the level of uncertainty they express. We show that not only can users control such a system successfully, they can use it to improve their performance when typing words not known to the language model. Finally, in Chapter 7 we show that users’ touch behaviour is significantly different across different tasks, particularly for typing compared to pointing tasks. We use this to motivate an investigation of the use of a sparse regression algorithm, the Relevance Vector Machine, to train offset models using small amounts of data.
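As a rough, hypothetical illustration of the kind of offset model this abstract describes (not the thesis's own implementation), the sketch below fits a Gaussian Process regression to synthetic touch data, then uses the predictive mean to correct a new touch and the predictive standard deviation as its uncertainty. The synthetic data, kernel choice and use of scikit-learn are assumptions made purely for illustration.

```python
# A minimal sketch (not the thesis's code): learn a touch-offset model with GP
# regression and use the predictive std as a per-touch uncertainty estimate.
# The synthetic data and kernel choice below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical calibration data: recorded touch x-positions (normalised 0..1)
# and the offset to the intended target, with a non-linear systematic part
# plus a random component.
touch_x = rng.uniform(0.0, 1.0, size=(200, 1))
offset_x = 0.03 * np.sin(4 * np.pi * touch_x[:, 0]) + rng.normal(0, 0.01, 200)

# RBF captures a smooth, user-specific offset function; WhiteKernel absorbs
# the random (non-deterministic) component of touch behaviour.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(touch_x, offset_x)

# Correct a new touch: subtract the predicted offset, and keep the predictive
# std as a measure of how uncertain this particular touch is.
new_touch = np.array([[0.42]])
pred_offset, pred_std = gp.predict(new_touch, return_std=True)
corrected = new_touch[0, 0] - pred_offset[0]
print(f"corrected x = {corrected:.4f}, uncertainty (std) = {pred_std[0]:.4f}")
```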

193 | A framework for Adaptive Capability Profiling | Bell, Matthew J. | January 2015
This thesis documents research providing improvements in the field of accessibility modelling, which will be of particular interest as computing becomes increasingly ubiquitous. It is argued that a new approach is required that takes into account the dynamic relationship between users, their technology (both hardware and software) and any additional Assistive Technologies (ATs) that may be required. In addition, the approach must find a balance between fidelity and transportability. A theoretical framework has been developed that is able to represent both users and technology in symmetrical (hierarchical) recursive profiles, using a vocabulary that moves from device-specific to device-agnostic capabilities. The research has resulted in the development of a single unified solution that is able to functionally assess the accessibility of interactions through the use of pattern matching between graph-based profiles. A self-efficacy study was also conducted, which identified the inability of older people to provide the data necessary to drive a system based on the framework. Subsequently, the ethical considerations surrounding the use of automated data collection agents were discussed and a mechanism for representing contextual information was also included. Finally, real user data was collected and processed using a practically implemented prototype to provide an evaluation of the approach. The thesis represents a contribution through its ability to both: (1) accommodate the collection of data from a wide variety of sources, and (2) support accessibility assessments at varying levels of abstraction in order to identify if/where assistance may be necessary. The resulting approach has contributed to a work-package of the Sus-IT project, under the New Dynamics of Ageing (NDA) programme of research in the UK. It has also been presented to a W3C Research and Development Working Group symposium on User Modelling for Accessibility (UM4A). Finally, dissemination has been taken forward through its inclusion as an invited paper presented during a subsequent parallel session within the 8th International Conference on Universal Access in Human-Computer Interaction.
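The profile-matching idea can be pictured with a small, entirely hypothetical sketch (not the framework's actual representation, vocabulary or graph formalism): user capabilities and device demands are stored as nested profiles and walked together to flag where assistance may be needed.

```python
# An illustrative sketch only (not the framework described above): represent a
# user's capabilities and a device's demands as small nested profiles and walk
# them together to flag where assistance may be needed. Names and scales are
# hypothetical.
user_capabilities = {
    "vision": {"acuity": 0.4, "colour_discrimination": 0.9},
    "motor":  {"fine_pointing": 0.3, "key_press_force": 0.8},
}

device_demands = {
    "vision": {"acuity": 0.6},               # small on-screen text
    "motor":  {"fine_pointing": 0.7},        # small touch targets
}

def assess(capabilities, demands, path=""):
    """Recursively compare demands against capabilities; yield mismatches."""
    for name, demand in demands.items():
        here = f"{path}/{name}"
        capability = capabilities.get(name)
        if isinstance(demand, dict):
            yield from assess(capability or {}, demand, here)
        elif capability is None or capability < demand:
            yield (here, capability, demand)

for where, have, need in assess(user_capabilities, device_demands):
    print(f"possible barrier at {where}: capability={have}, demand={need}")
```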

194 | Creating illusion in computer aided performance | Marshall, Joe | January 2009
This thesis studies the creation of illusion in computer aided performance. Illusion is created here by using deceptions, and a design framework is presented which suggests several different deception strategies that may be useful. The framework has been developed in an iterative process in tandem with the development of three real-world performances which were used to explore deception strategies. The first case study presents a system for augmenting juggling performance. The techniques that were developed to control this system demonstrate how deception may become useful even when the core of the performance is not deceptive in any way. This is followed by a magic performance called the Cup Game, which was designed to explicitly test the strategies of deception described in the framework. The final case study is an interactive art installation which presents the illusion of a pet rock that lives in a cage. This demonstrates the usefulness of suspension of disbelief in the creation of illusions. It also demonstrates interesting social effects that are used to strengthen this suspension of disbelief. The idea of creating the impression of a false situation is inspired particularly by previous HCI work on public interaction. This work demonstrated the usefulness of hiding interface use or computer outputs from some people in a situation. The creation of deliberately ambiguous computer interfaces, which allow for a wider variety of interpretations to be made by the user, has also been described. The work here goes beyond these techniques to use technology to actively create false impressions. The techniques used in this process are guided by the work of magic performers, and by psychological studies of how magic performance works. As well as artistic performance, it is envisaged that this work may prove applicable to more traditional situations. In addition to the framework itself, the development of the case studies has created several useful algorithms which have wider applications. The case studies are also useful guides for those creating performance systems, or other systems where deceptive techniques may be useful.

195 | ERES methodology and approximate algebraic computations | Christou, D. | January 2011
The area of approximate algebraic computations is a fast-growing field in modern computer algebra which has attracted many researchers in recent years. Amongst the various algebraic computations, the computations of the Greatest Common Divisor (GCD) and the Least Common Multiple (LCM) of a set of polynomials are challenging problems that arise from several applications in applied mathematics and engineering. Several methods have been proposed for the computation of the GCD of polynomials using tools and notions either from linear algebra or linear systems theory. Amongst these, a matrix-based method, which relies on the properties of the GCD as an invariant of the original set of polynomials under elementary row transformations and shifting elements in the rows of a matrix, shows interesting properties in relation to the problem of the GCD of sets of many polynomials. These transformations are referred to as Extended-Row-Equivalence and Shifting (ERES) operations, and their iterative application to a basis matrix, which is formed directly from the coefficients of the given polynomials, formulates the ERES method for the computation of the GCD of polynomials and establishes the basic principles of the ERES methodology. The main objective of the present thesis concerns the improvement of the ERES methodology and its use for the efficient computation of the GCD and LCM of sets of several univariate polynomials with parameter uncertainty, as well as the extension of its application to other related algebraic problems. New theoretical and numerical properties of the ERES method are defined in this thesis by introducing the matrix representation of the Shifting operation, which is used to change the position of the elements in the rows of a matrix. This important theoretical result opens the way for a new algebraic representation of the GCD of a set of polynomials, the remainder, and the quotient of Euclid's division for two polynomials based on ERES operations. The principles of the ERES methodology provide the means to develop numerical algorithms for the GCD and LCM of polynomials that inherently have the potential to efficiently work with sets of several polynomials with inexactly known coefficients. The present new implementation of the ERES method, referred to as the "Hybrid ERES Algorithm", is based on the effective combination of symbolic-numeric arithmetic (hybrid arithmetic) and shows interesting computational properties concerning the approximate GCD and LCM problems. The evaluation of the quality, or "strength", of an approximate GCD is equivalent to an evaluation of a distance problem in a projective space and it is thus reduced to an optimisation problem. An efficient implementation of an algorithm computing the strength bounds is introduced here by exploiting some of the special aspects of the respective distance problem. Furthermore, a new ERES-based method has been developed for the approximate LCM which involves a least-squares minimisation process, applied to a matrix which is formed from the remainders of Euclid's division by ERES operations. The residual from the least-squares process characterises the quality of the obtained approximate LCM. Finally, the developed framework of the ERES methodology is also applied to the representation of continued fractions to improve the stability criterion for linear systems based on the Routh-Hurwitz test.
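For orientation, the sketch below shows the classical Euclidean GCD for two univariate polynomials, with a numerical tolerance standing in for "inexactly known coefficients". It is not the ERES method itself, which is matrix-based and handles whole sets of polynomials, but it makes the underlying GCD and approximate-GCD notions concrete; the coefficient representation and tolerance are illustrative assumptions.

```python
# A minimal sketch of the classical Euclidean polynomial GCD, for orientation
# only; the ERES method described above is matrix-based and works on whole
# sets of polynomials, which this toy two-polynomial version does not attempt.
import numpy as np

def poly_gcd(a, b, tol=1e-9):
    """GCD of two univariate polynomials given as coefficient arrays
    (highest degree first), via repeated polynomial division."""
    a = np.trim_zeros(np.asarray(a, float), "f")
    b = np.trim_zeros(np.asarray(b, float), "f")
    while b.size and np.max(np.abs(b)) > tol:
        _, r = np.polydiv(a, b)          # remainder of Euclid's division
        r = np.trim_zeros(r, "f")
        # Treat near-zero remainders as zero: with inexact coefficients this
        # tolerance effectively decides the "approximate" GCD degree.
        if r.size and np.max(np.abs(r)) < tol:
            r = np.array([])
        a, b = b, r
    return a / a[0]                      # normalise to a monic polynomial

# (x - 1)(x - 2) and (x - 1)(x - 3) share the factor (x - 1).
p = np.poly([1.0, 2.0])
q = np.poly([1.0, 3.0])
print(poly_gcd(p, q))                    # ~ [ 1. -1.], i.e. x - 1
```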

196 | Document ranking with quantum probabilities | Zuccon, Guido | January 2012
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to examine evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario) then the use of quantum probability theory and thus of qPRP is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates, in fact, that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research.
These include (1) investigating estimations and approximations of quantum interference in qPRP, (2) exploiting complex numbers for the representation of documents and queries, and (3) applying the concepts underlying qPRP to tasks other than document ranking.
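To make the role of interference concrete, here is an illustrative reconstruction (not the thesis's own estimator): documents are chosen greedily by relevance probability plus pairwise interference-style terms of the form 2*sqrt(p_i*p_j)*cos(theta_ij) against already-ranked documents, with cos(theta) crudely derived from made-up inter-document similarities so that redundant documents interfere destructively.

```python
# Illustrative sketch of quantum-interference-style ranking (not the thesis's
# own estimator): pick documents greedily by relevance probability plus a
# pairwise term 2*sqrt(p_i*p_j)*cos(theta_ij) against already-ranked documents.
# The relevance probabilities, similarities and the mapping from similarity to
# cos(theta) below are made-up placeholders.
import math

relevance = {"d1": 0.80, "d2": 0.75, "d3": 0.60}       # P(relevance | query)
similarity = {("d1", "d2"): 0.9, ("d1", "d3"): 0.1,    # symmetric doc-doc
              ("d2", "d3"): 0.2}                       # similarity in [0, 1]

def cos_theta(a, b):
    # Crude placeholder: highly similar documents interfere destructively,
    # so redundant documents are pushed down the ranking.
    s = similarity.get((a, b)) or similarity.get((b, a)) or 0.0
    return -s

def rank(docs):
    ranking = []
    remaining = set(docs)
    while remaining:
        def score(d):
            interference = sum(2 * math.sqrt(relevance[d] * relevance[r])
                               * cos_theta(d, r) for r in ranking)
            return relevance[d] + interference
        best = max(remaining, key=score)
        ranking.append(best)
        remaining.remove(best)
    return ranking

print(rank(relevance))   # d2 is redundant w.r.t. d1, so d3 moves up: d1, d3, d2
```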

197 | Principles and applications of algorithmic problem solving | Ferreira, Joao Fernando Peixoto | January 2011
Algorithmic problem solving provides a radically new way of approaching and solving problems in general by using the advances that have been made in the basic principles of correct-by-construction algorithm design. The aim of this thesis is to provide educational material that shows how these advances can be used to support the teaching of mathematics and computing. We rewrite material on elementary number theory and we show how the focus on the algorithmic content of the theory allows the systematisation of existing proofs and, more importantly, the construction of new knowledge in a practical and elegant way. For example, based on Euclid’s algorithm, we derive a new and efficient algorithm to enumerate the positive rational numbers in two different ways, and we develop a new and constructive proof of the two-squares theorem. Because the teaching of any subject can only be effective if the teacher has access to abundant and sufficiently varied educational material, we also include a catalogue of teaching scenarios. Teaching scenarios are fully worked out solutions to algorithmic problems together with detailed guidelines on the principles captured by the problem, how the problem is tackled, and how it is solved. Most of the scenarios have a recreational flavour and are designed to promote self-discovery by the students. Based on the material developed, we are convinced that goal-oriented, calculational algorithmic skills can be used to enrich and reinvigorate the teaching of mathematics and computing.
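As a concrete, generic example of what a duplicate-free enumeration of the positive rationals looks like, the sketch below uses Newman's successor formula for the Calkin-Wilf order; it is shown only to make the idea tangible and is not claimed to be the derivation developed in the thesis.

```python
# A concrete (but generic) example of enumerating every positive rational
# exactly once, using Newman's successor formula for the Calkin-Wilf order;
# it is not claimed to be the thesis's own derivation.
from fractions import Fraction
from math import floor

def calkin_wilf():
    x = Fraction(1, 1)
    while True:
        yield x
        x = 1 / (2 * floor(x) - x + 1)   # next rational, no duplicates

gen = calkin_wilf()
print(", ".join(str(next(gen)) for _ in range(8)))
# 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4
```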

198 | Case for holistic query evaluation | Krikellas, Konstantinos | January 2010
In this thesis we present the holistic query evaluation model. We propose a novel query engine design that exploits the characteristics of modern processors when queries execute inside main memory. The holistic model (a) is based on template-based code generation for each executed query, (b) uses multithreading to adapt to multicore processor architectures and (c) addresses the optimization problem of scheduling multiple threads for intra-query parallelism. Main-memory query execution is a common operation in modern database servers equipped with tens or hundreds of gigabytes of RAM. In such an execution environment, the query engine needs to adapt to the CPU characteristics to boost performance. For this purpose, holistic query evaluation applies customized code generation to database query evaluation. The idea is to use a collection of highly efficient code templates and dynamically instantiate them to create query- and hardware-specific source code. The source code is compiled and dynamically linked to the database server for processing. Code generation diminishes the bloat of higher-level programming abstractions necessary for implementing generic, interpreted, SQL query engines. At the same time, the generated code is customized for the hardware it will run on. The holistic model supports the most frequently used query processing algorithms, namely sorting, partitioning, join evaluation, and aggregation, thus allowing the efficient evaluation of complex DSS or OLAP queries. Modern CPUs follow multicore designs with multiple threads running in parallel. The dataflow of query engine algorithms needs to be adapted to exploit such designs. We identify memory accesses and thread synchronization as the main bottlenecks in a multicore execution environment. We extend the holistic query evaluation model and propose techniques to mitigate the impact of these bottlenecks on multithreaded query evaluation. We analytically model the expected performance and scalability of the proposed algorithms according to the hardware specifications. The analytical performance expressions can be used by the optimizer to statically estimate the speedup of multithreaded query execution. Finally, we examine the problem of thread scheduling in the context of multithreaded query evaluation on multicore CPUs. The search space of possible operator execution schedules grows rapidly, thus forbidding the use of exhaustive techniques. We model intra-query parallelism on multicore systems and present scheduling heuristics that result in different degrees of schedule quality and optimization cost. We identify cases where each of our proposed algorithms, or combinations of them, is expected to generate schedules of high quality at an acceptable running cost.
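The code-generation idea can be mimicked with a toy sketch: the engine described above instantiates templates into low-level source that is compiled and linked into the server, whereas the Python below only illustrates specialising a template per query and "linking" the result into the running process. The query shape, template and names are hypothetical.

```python
# A toy illustration of template-based query code generation (the engine
# described above generates and compiles low-level code; this Python sketch
# only mimics the idea of specialising a template per query).
TEMPLATE = """
def run_query(rows):
    # Specialised for: SELECT SUM({col}) FROM t WHERE {col} > {threshold}
    total = 0
    for row in rows:
        v = row[{col_index}]
        if v > {threshold}:
            total += v
    return total
"""

def generate_query(col, col_index, threshold):
    source = TEMPLATE.format(col=col, col_index=col_index, threshold=threshold)
    namespace = {}
    exec(compile(source, "<generated-query>", "exec"), namespace)   # "link" it
    return namespace["run_query"]

rows = [(1, 10.0), (2, 35.0), (3, 7.5)]
query = generate_query(col="price", col_index=1, threshold=8.0)
print(query(rows))   # 45.0
```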

199 | Pure subtype systems : a type theory for extensible software | Hutchins, DeLesley | January 2009
This thesis presents a novel approach to type theory called “pure subtype systems”, and a core calculus called DEEP, which is based on that approach. DEEP is capable of modeling a number of interesting language techniques that have been proposed in the literature, including mixin modules, virtual classes, feature-oriented programming, and partial evaluation. The design of DEEP was motivated by two well-known problems: “the expression problem”, and “the tag elimination problem.” The expression problem is concerned with the design of an interpreter that is extensible, and requires an advanced module system. The tag elimination problem is concerned with the design of an interpreter that is efficient, and requires an advanced partial evaluator. We present a solution in DEEP that solves both problems simultaneously, which has never been done before. These two problems serve as an “acid test” for advanced type theories, because they make heavy demands on the static type system. Our solution in DEEP makes use of the following capabilities. (1) Virtual types are type definitions within a module that can be extended by clients of the module. (2) Type definitions may be mutually recursive. (3) Higher-order subtyping and bounded quantification are used to represent partial information about types. (4) Dependent types and singleton types provide increased type precision. The combination of recursive types, virtual types, dependent types, higher-order subtyping, and bounded quantification is highly non-trivial. We introduce “pure subtype systems” as a way of managing this complexity. Pure subtype systems eliminate the distinction between types and objects; every term can behave as either a type or an object depending on context. A subtype relation is defined over all terms, and subtyping, rather than typing, forms the basis of the theory. We show that higher-order subtyping is strong enough to completely subsume the traditional type relation, and we provide practical algorithms for type checking and for finding minimal types. The cost of using pure subtype systems lies in the complexity of the meta-theory. Unfortunately, we are unable to establish some basic meta-theoretic properties, such as type safety and transitivity elimination, although we have made some progress towards these goals. We formulate the subtype relation as an abstract reduction system, and we show that the type theory is sound if the reduction system is confluent. We can prove that reductions are locally confluent, but a proof of global confluence remains elusive. In summary, pure subtype systems represent a new and interesting approach to type theory. This thesis describes the basic properties of pure subtype systems, and provides concrete examples of how they can be applied. The DEEP calculus demonstrates that our approach has a number of real-world practical applications in areas that have proved to be quite difficult for traditional type theories to handle. However, the ultimate soundness of the technique remains an open question.

200 | Practical structured parallelism using BMF | Crooke, David | January 1998
This thesis concerns the use of the Bird-Meertens Formalism as a mechanism to control parallelism in an imperative programming language. One of the main reasons for the failure of parallelism to enter mainstream computing is the difficulty of developing software and the lack of the portability and performance predictability enjoyed by sequential systems. A key objective should be to minimise costs by abstracting much of the complexity away from the programmer. Criteria for a suitable parallel programming paradigm to meet this goal are defined. The Bird-Meertens Formalism, which has in the past been shown to be a suitable vehicle for expressing parallel algorithms, is used as the basis for a proposed imperative parallel programming paradigm which meets these criteria. A programming language is proposed which is an example of this paradigm, based on the BMF Theory of Lists and the sequential language C. A concurrent operational semantics is outlined, with the emphasis on its use as a practical tool for increasing confidence in program correctness, rather than on full and rigorous formality. A prototype implementation of a subset of this language for a distributed memory, massively parallel computer is produced in the form of a C subroutine library. Although not offering realistic absolute performance, it permits measurements of scalability and relative performance to be undertaken. A case study is undertaken which implements a simple but realistic algorithm in the language, and considers how well the criteria outlined at the start of the project are met. The prototype library implementation is used for performance measurements. A range of further possibilities is examined, in particular ways in which the paradigm language may be extended, and the possibility of using alternative BMF-like type theories. Pragmatic considerations for achieving performance in a production implementation are discussed.
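The flavour of this style of structured parallelism can be suggested with a small sketch. The thesis's prototype is a C subroutine library for a distributed-memory machine; the Python fragment below only shows the shape of the abstraction, namely a computation expressed as a map followed by an associative reduce over a list, leaving the runtime free to distribute the work.

```python
# An illustrative Python sketch of the BMF-style idea (the thesis's prototype
# is a C library for a distributed-memory machine; this only shows the shape
# of the abstraction): express a computation as map followed by reduce over a
# list, so the runtime is free to distribute the work.
from multiprocessing import Pool
from functools import reduce
import operator

def square(x):            # the function passed to the "map" skeleton
    return x * x

def sum_of_squares(xs, workers=4):
    with Pool(workers) as pool:
        mapped = pool.map(square, xs)          # data-parallel map
    return reduce(operator.add, mapped, 0)     # associative reduce

if __name__ == "__main__":
    print(sum_of_squares(range(10)))           # 285
```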