181

Models of type theory with strict equality

Capriotti, Paolo January 2017
This thesis introduces the idea of two-level type theory, an extension of Martin-Löf type theory that adds a notion of strict equality as an internal primitive. A type theory with a strict equality alongside the more conventional form of equality, the latter being of fundamental importance for the recent innovation of homotopy type theory (HoTT), was first proposed by Voevodsky, and is usually referred to as HTS. Here, we generalise and expand this idea, by developing a semantic framework that gives a systematic account of type formers for two-level systems, and proving a conservativity result relating back to a conventional type theory like HoTT. Finally, we show how a two-level theory can be used to provide partial solutions to open problems in HoTT. In particular, we use it to construct semi-simplicial types, and lay out the foundations of an internal theory of (∞, 1)-categories.
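The contrast the thesis builds on can be illustrated in a proof assistant. The Lean sketch below is only an analogy: Lean keeps strict (definitional) equality outside the logic, whereas two-level type theory makes strict equality an internal primitive alongside the weak, homotopical one.

```lean
-- `n + 0` reduces to `n` by computation, so the two sides are strictly
-- (judgmentally) equal: `rfl` type-checks with no proof work.
example (n : Nat) : n + 0 = n := rfl

-- `0 + n` does not reduce to `n`: the equality holds only propositionally
-- and must be proved, here by a library lemma that performs induction.
example (n : Nat) : 0 + n = n := Nat.zero_add n
```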
182

Understanding serendipity and its application in the context of information science and technology

Zhou, Xiaosong January 2018
Serendipity is widely experienced in contemporary society, especially in the digital world. According to the Concise Oxford English Dictionary, “serendipity” is “the occurrence and development of events by chance in a happy or beneficial way”. This PhD project aims to understand serendipity in the context of information research, and then to design information technologies that can support the encountering of serendipity in cyberspace. The project is organised in two parts. The first part investigates the nature of serendipity through three user studies. A systematic literature review of existing empirical studies of serendipity revealed methodological problems in current work; for example, the most widely used methods are conventional ones such as interviews and surveys, which mainly yield subjective data from participants. The author therefore conducted a first user study, an expert interview in which nine experts in serendipity research were interviewed with a focus on methodological issues. This study gave the author a broader understanding of the advantages and disadvantages of different research methods for studying serendipity. The second user study, a diary-based study, was carried out among a group of Chinese scholars to investigate further the role that “context” plays in the process of serendipity. The study lasted two weeks and collected 62 serendipitous cases from 16 participants. Its outcome provided a better understanding of how these scholars experience serendipity, and a context-based research model was constructed in which the roles of external context, social context and internal context in the process of serendipity were identified in detail. One interesting finding of the second study was that emotions played a role in participants’ experience of serendipity, an aspect largely ignored by serendipity researchers; the author therefore conducted a third user study to examine the impact of emotions during serendipitous encountering. This study used an electrodermal activity (EDA) device to record participants’ physiological signals during the process of serendipity, via a self-developed algorithm embedded, through a “Wizard of Oz” approach, in a sketch game. The results show that participants are more likely to experience serendipity under the influence of positive emotions and/or when exhibiting skin conductance responses (SCRs). The second part of the project applies serendipity to recommendation technology. Recommender systems are an important arena for serendipity in the digital world, as users are no longer satisfied with merely “accurate” recommendations and want recommendations that are more serendipitous and interesting to them. However, a review of existing studies on serendipitous recommendation found that the achievements in understanding the nature of serendipity within information science have failed to gain attention from researchers working on recommender systems.
The author then developed a new serendipitous recommendation algorithm that adopts the theory of serendipity from information research, and implemented it on a real data set: MovieLens, which comprises 138,493 users with 20,000,263 ratings across 27,278 movies. The algorithm was evaluated on a sub-dataset consisting of 855,598 ratings from 2,113 users on 10,197 movies. It was compared with two widely used collaborative filtering algorithms (user-based and item-based collaborative filtering), and the results demonstrated that the developed algorithm is more effective at recommending “unexpected” and “serendipitous” movies to users. A follow-up user study with twelve movie scholars showed that participants could experience serendipity when recommended movies by the developed algorithm, and that, compared to user-based collaborative filtering, they were more willing to follow the recommendations produced by the serendipitous algorithm.
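The abstract does not spell out the algorithm, so the sketch below is only a generic illustration of the design space it occupies: score items with user-based collaborative filtering, then re-rank by blending predicted relevance with an unexpectedness term. The familiarity proxy, the `alpha` trade-off and all names are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def user_based_scores(ratings: np.ndarray, user: int) -> np.ndarray:
    """Predict item scores for `user` via cosine user-based CF.
    `ratings` is an (n_users, n_items) matrix with 0 meaning unrated."""
    norms = np.linalg.norm(ratings, axis=1, keepdims=True) + 1e-9
    sims = (ratings / norms) @ (ratings[user] / norms[user])
    sims[user] = 0.0  # exclude the user themselves
    return sims @ ratings / (np.abs(sims).sum() + 1e-9)

def serendipitous_rank(ratings, user, alpha=0.7, top_n=10):
    """Blend CF relevance with unexpectedness (1 minus similarity to the
    items the user has already rated) and return the top-n item ids."""
    scores = user_based_scores(ratings, user)
    seen = ratings[user] > 0
    item_norms = np.linalg.norm(ratings, axis=0, keepdims=True) + 1e-9
    item_sims = (ratings / item_norms).T @ (ratings / item_norms)
    familiarity = item_sims[:, seen].mean(axis=1)
    blended = alpha * scores + (1 - alpha) * (1 - familiarity)
    blended[seen] = -np.inf  # never re-recommend rated items
    return np.argsort(blended)[::-1][:top_n]
```

Setting `alpha = 1` recovers plain user-based collaborative filtering, the kind of baseline the evaluation compares against.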
183

Flexible autonomy and context in human-agent collectives

Dybalova, Daniela January 2017
Human-agent collectives (HACs) are collaborative relationships between humans and software agents, formed to meet the individual and collective goals of their members. In general, different members of a HAC should have differing degrees of autonomy in determining how a goal is to be achieved, and the degree of autonomy that should be enjoyed by each member varies with context. This thesis explores how norms can be used to achieve context-sensitive flexible autonomy in HACs. Norms can be viewed as defining standards of ideal behaviour. In the form of rules and codes, they are widely used to coordinate and regulate activity in human organisations, and more recently they have also been proposed as a coordination mechanism for multi-agent systems (MAS). Norms therefore have the potential to form a common framework for coordination and control in HACs. The thesis develops a novel framework in which group and individual norms are used to specify both the goal to be achieved by a HAC and the degree of autonomy of the HAC and/or of its members in achieving a goal. The framework allows members of a collective to create norms specifying how a goal should (or should not) be achieved, together with sanctions for non-compliance. These norms form part of the decision-making context of both the humans and the agents in the collective. A prototype implementation of the framework was evaluated using the Colored Trails test-bed in a scenario involving mixed human-agent teams. The experiments confirmed that norms can be used to coordinate HACs and to facilitate context-related flexible autonomy.
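As a rough sketch of the machinery such a framework requires (the representation and the utility-style sanction below are illustrative assumptions, not taken from the thesis), a norm can be modelled as a compliance predicate plus a sanction, so that humans and agents alike weigh a violation against its cost rather than being hard-coded to comply; this is what keeps autonomy flexible.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    scope: str                         # 'group' or 'individual'
    description: str
    condition: Callable[[dict], bool]  # True if the action complies
    sanction: float                    # cost incurred on violation

def violation_cost(action: dict, norms: list[Norm]) -> float:
    """Total sanction an agent incurs for `action` under `norms`."""
    return sum(n.sanction for n in norms if not n.condition(action))

# Example: a group norm forbidding moves through restricted cells.
no_go = Norm(scope="group",
             description="avoid restricted cells",
             condition=lambda a: a.get("cell") not in {(3, 4), (3, 5)},
             sanction=10.0)
print(violation_cost({"cell": (3, 4)}, [no_go]))  # -> 10.0
```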
184

Experiential manufacturing : designing meaningful relationships between people, data and things

Selby, Mark January 2017
This thesis presents a practice-led research investigation into ways of designing more experiential and evocative interactions with data relating to our experiences, whereby less explicit, more intrinsic and aesthetic relationships are made between people, objects and data. I argue that the utilitarian values and instrumental approach behind the design of most systems that mediate our personal autobiographical data, while important, are not appropriate for more emotional forms of remembering. Therefore, systems are needed that cater specifically to modes of remembering such as reminiscence and reflection. By learning from our material encounters with memory, there are rich opportunities for design to uncover the latent values that might exist in biographical data. To articulate the design rationale of the thesis, I describe two existing design projects: the Digital Slide Viewer and Photobox. These provide some design principles that offer guidance in making memory data physical so as to encourage meaningful material practices, and ways that interactions might be designed to promote reflection. After exploratory interviews to gather insight into the ways people associate meaning with objects, a set of designed provocations was produced. The Poker Chip sought to understand the ways that the material form of an object connects to its meaning, while The Bowl investigates how the actions we might use to make these meaningful objects might themselves be meaningful. The final designed provocation takes ideas from its predecessors and puts them into practice with a data-driven system. By responding to live data from real earthquakes, the Earthquake Shelf creates a tangible rendition that, by damaging objects, leaves behind material evidence of a remote event. During a long-term field deployment, the connection between the objects on the shelf and the participant’s memories proved elusive, but the shelf itself provided a viscerally real connection to a past experience. The outcome of this thesis, then, is to articulate Experiential Manufacturing: a position on the design of technologies intended to mediate more emotional forms of memory, such that they can create more compelling relations between data, people and things. It does this by first opening and exploring a design space based on alternative values for designing technologies of reminiscence that mediate our life experiences. By prioritising the aesthetic elements of experiences, rather than focusing on the data that describes them, the thesis explores the potential of materiality, liveness and slowness to create systems that mediate our experience data in more evocative and emotionally valuable ways. It then presents this position as a set of thematic values, or Strong Concepts, at the heart of Experiential Manufacturing.
185

The modular compilation of effects

Day, Laurence E. January 2017
The introduction of new features to a programming language often requires its compiler to go to the effort of ensuring that they are introduced in a manner that does not interfere with the existing code base. Engineers frequently find themselves changing code that has already been designed, implemented and (ideally) proved correct, which is bad practice from a software engineering point of view. This thesis addresses the issue of constructing a compiler for a source language that is modular in the computational features it supports. Utilising a minimal language that allows us to demonstrate the underlying techniques, we go on to introduce a significant range of effectful features in a modular manner, showing that their syntax can be compiled independently, and that source languages containing multiple features can be compiled by means of a fold. In the event that new features necessitate changes in the underlying representation of either the source language or the compiler, we show that our framework is capable of incorporating these changes with minimal disruption. Finally, we show how the framework we have developed can be used to define both modular evaluators and modular virtual machines.
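The thesis develops this in a typed functional setting; purely to convey the flavour of the idea, the following sketch shows each feature contributing its own compilation rules (an algebra), with one generic fold compiling any source language assembled from those features. The target stack-machine instructions are invented for illustration.

```python
def fold(node, algebra):
    """Generic fold over a syntax tree: compile the children recursively,
    then apply the rule the current feature registered for this node."""
    tag, *args = node
    compiled = [fold(a, algebra) if isinstance(a, tuple) else a for a in args]
    return algebra[tag](*compiled)

# Arithmetic feature, compiled independently of any other feature.
arith = {
    "lit": lambda n: [("PUSH", n)],
    "add": lambda c1, c2: c1 + c2 + [("ADD",)],
}

# Exception feature, likewise self-contained.
exc = {
    "throw": lambda: [("THROW",)],
    "catch": lambda body, handler:
        [("MARK", len(body) + 1)] + body + [("UNMARK",)] + handler,
}

# A language with both features is compiled by the union of the algebras;
# adding a feature means adding an algebra, not editing existing ones.
lang = {**arith, **exc}
prog = ("catch", ("add", ("lit", 1), ("throw",)), ("lit", 42))
print(fold(prog, lang))
```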
186

An ethnographic study of crowdwork via Amazon Mechanical Turk in India

Gupta, Neha January 2017
With the growth of ubiquitous computing, it is becoming increasingly easy to carry out work from anywhere using a simple computing device connected to the internet. Governments, policy makers, not-for-profit and scientific organisations have been reaching out to members of the general public – citizens, popularly known as the ‘crowd’ – to get their ideas, opinions and expertise on various matters. This phenomenon of using the expertise of the ‘crowd’ for different purposes is called ‘crowdsourcing’. For some time now, businesses have been looking for new ways of saving money beyond outsourcing, and have thus started reaching out to the crowd through various online platforms to gain access to a cheap, mobile workforce that is presumably available round the clock. Employing the crowd brings massive benefits for the organisations that choose to use it: crowdworkers serve as contractors or day workers who do not receive standard employee benefits such as holiday pay and insurance, and who pay for their own primary resources – internet, computer, infrastructural and subsistence costs. There is also no current legislation providing guidelines for this type of work, although a number of researchers and advocacy groups are now trying to change this. For the workers, crowdsourcing provides opportunities to make money, develop skills, learn to work and see a world outside their own, thanks to the growth of telecommunication technologies and unstable employment patterns around the world. Yet although there is much discourse surrounding crowdsourcing and crowdwork, particularly concerning the legal aspects of such work, not much is understood about the work or the workers. Questions about this workforce remain unanswered: why does the crowd choose to do this type of work; how do workers find crowdsourcing platforms and crowdsourced jobs; what do they look for in the jobs they pick; how do they organise their activities (both work and non-work); what tools and technologies do they use; what are their concerns as workers; and how do they relate to requesters and the work platform? This thesis aims to provide insights into the work of crowdwork – what ‘doing crowdwork’ entails – from the perspective of the workers who take part in crowdsourced work through online platforms. It presents insights from an ethnographic study of crowdworkers conducted in India during the summer of 2013, with a particular focus on Amazon Mechanical Turk (AMT) as the principal site of work. The naturalistic data was collected through virtual and in-person interviews, as well as observations of crowdworkers in their places of work and dwelling, and analysed with an ethnomethodological orientation, to uncover the local methods of the workers in their own words and provide more information about this understudied cohort. Learning about crowdwork and its workers is important because this type of work has potential from an organisational perspective: a variety of relatively low-skilled work such as data entry, tagging, information verification, transcription and translation is being (and could be) crowdsourced by medium and large organisations.
Hence this thesis makes contributions to the fields of human computation, CSCW, HCI and crowdsourcing by bringing forth insights into ‘doing crowdwork’ and ‘being a crowdworker’, which may help parties interested in using, applying or designing for crowdsourced work and crowdsourcing platforms, as well as researchers and designers in this field. The contributions of the thesis include:
• Uncovering the heterogeneity in the motives of turkers: what motivates workers to work on platforms like AMT, and why they choose to continue their engagement with such work and platforms.
• The features of the crowdsourcing platform: what makes a platform attractive to turkers? Features such as ease of use and flexibility in choosing work played an important role.
• The social nature of the work: although crowdwork is highly individualised and atomic, the work itself was very social. Most workers found that they needed help with one thing or another and turned to online resources such as forums and Facebook groups for support or information regarding work and personal life.
• The invisible work and constant contingency management undertaken by crowdworkers: workers had to find and do work while managing contingencies created by the opaque nature of the platform studied, AMT, requiring them to seek external help, e.g. browser plug-ins, to work around this opacity – which in turn created more unpaid work for them.
187

Reinforcement learning hyper-heuristics for optimisation

Alanazi, Fawaz January 2017
Hyper-heuristics are search algorithms that operate on a set of heuristics with the goal of solving a wide range of optimisation problems. It has been observed that different heuristics perform differently on different optimisation problems. A hyper-heuristic combines a set of predefined heuristics and applies a machine learning technique to predict which heuristic is the most suitable to apply at a given point in time while solving a given problem. A variety of machine learning techniques have been proposed in the literature, most of them reinforcement learning mechanisms that interact with the search environment in order to adapt the selection of heuristics during the search process. The literature on the theoretical foundations of reinforcement learning hyper-heuristics is almost non-existent. This work provides theoretical analyses of reinforcement learning hyper-heuristics, with the goal of shedding light on their learning capabilities and limitations. This improves our understanding of these hyper-heuristics and aids the design of better ones. It is revealed that the commonly used additive reinforcement learning mechanism, under a mild assumption, asymptotically chooses heuristics uniformly at random. The thesis also poses the problem of identifying the most suitable heuristic with a given error probability, and shows a general lower bound on the time that "every" reinforcement learning hyper-heuristic needs to identify the most suitable heuristic with a given error probability. The results reveal a general limitation to the learning achievable by this computational approach. Following the theoretical analysis, several reusable and easy-to-implement reinforcement learning hyper-heuristics are proposed and evaluated on well-known combinatorial optimisation problems. One of the proposed hyper-heuristics outperformed a state-of-the-art algorithm on several benchmark problems of the well-known CHeSC 2011 competition.
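The additive mechanism the analysis targets can be stated in a few lines. The reward/penalty scheme and the proportional selection rule below are one common variant, sketched for concreteness rather than as the exact mechanism analysed.

```python
import random

def additive_rl_hyper_heuristic(heuristics, start, fitness, steps=10_000):
    """Select low-level heuristics by additive reinforcement learning:
    +1 to a heuristic's score on strict improvement, -1 otherwise
    (floored at 1 so every heuristic stays selectable)."""
    scores = [1] * len(heuristics)
    best = start
    for _ in range(steps):
        # pick a heuristic with probability proportional to its score
        i = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[i](best)
        if fitness(candidate) > fitness(best):
            best, scores[i] = candidate, scores[i] + 1
        else:
            scores[i] = max(1, scores[i] - 1)
    return best
```

The thesis's negative result is visible in miniature here: once improvements become rare, the penalties drive all scores back towards the floor, so selection drifts towards uniform.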
188

Investigation of the effect of insertional mutation in agr and sigB loci in Clostridium difficile R20291

Montfort-Gardeazabal, Jorge M. January 2017
Clostridium difficile has become the major cause of healthcare-acquired diarrhoea in the world. The disease caused by this pathogen is largely mediated by the production of toxins. The C. difficile genome contains an incomplete accessory gene regulator (agr) locus, designated agr1. In Staphylococcus aureus, the agr locus has been shown to regulate virulence factors through quorum sensing (QS). In addition to the incomplete agr1 locus found in all C. difficile strains sequenced to date, strains belonging to the BI/NAP1/027 group possess an additional agr operon, designated agr2; these strains have been found to produce higher toxin levels and more severe disease in patients. The studies described in this thesis show that the two agr systems present in the C. difficile BI/NAP1/027 strain R20291 are involved in the regulation of spore formation and toxin production. The agr1 locus (agrB1, agrD1) is responsible for the production of the autoinducing peptide (AIP) AgrD1, while the agr2 locus (agrC, agrA, agrB2, agrD2) mediates production of AgrD2. Initial mutational analysis using a University of Nottingham isolate (R20291 NM) suggested that AgrD1 positively regulates sporulation and negatively regulates toxin production, whereas AgrD2 positively regulates toxin production but negatively regulates spore formation. It was subsequently discovered that strain R20291 NM exhibits significantly different phenotypes from R20291 BW: the former possesses a single polar flagellum, whereas R20291 BW is peritrichously flagellated. The NM strain exhibited impaired motility and increased biofilm formation, demonstrated different growth rates, produced greater quantities of toxins and showed a relative delay in the onset of sporulation compared to R20291 BW. The equivalent agr mutants made in R20291 BW indicated that the regulatory control exerted by AgrD1 on sporulation was broadly the same, while AgrD2 did not seem to play any role in the regulation of sporulation, contrasting with observations made in the agrB2 mutant created in R20291 NM. Surprisingly, the effects of the agr mutations made in R20291 BW on toxin production were opposite to those observed in the NM strain. The subtle differences in the behaviour of the two R20291 isolates were most likely due to the presence of mutations in R20291 NM revealed by whole genome sequencing. Of the four genes affected, a mutation in the anti-sigma factor RsbW was considered the most likely culprit. An investigation of its possible role in toxin production and sporulation was therefore undertaken through complete ClosTron-mediated inactivation of rsbW in both R20291 isolates and the creation of a sigB ClosTron mutant in R20291 BW. The sporulation and toxin production phenotypes of the rsbW mutants of the two strains mirrored those of their respective agr2 mutants. Thus, inactivation of rsbW brought forward the onset of sporulation in R20291 NM but had no effect on the initiation of sporulation in R20291 BW, while toxin production was reduced in R20291 NM but increased in R20291 BW. Motility and biofilm formation in the rsbW mutants of both strains were unaffected. These data suggest that interference with the RsbW/SigB interaction preferentially affects AgrD2-mediated regulatory processes. In contrast, however, inactivation of SigB in R20291 BW caused a delay in the initiation of sporulation but did not affect toxin production.
Mutation of sigB likewise did not affect motility or biofilm formation, although, in common with other bacteria, the resistance of the R20291 BW sigB mutant to oxidative stress was reduced. The picture that emerges is of a complex regulatory interrelationship between the two agr QS systems and SigB in this important nosocomial pathogen, a relationship that has been subtly subverted in strain R20291 NM. These findings emphasise the importance of knowing the genome sequence of strains under investigation if valid conclusions are to be drawn.
189

A software engineering approach for agent-based modelling and simulation of public goods games

Vu, Tuong Manh January 2017
In Agent-based Modelling and Simulation (ABMS), a system is modelled as a collection of agents: autonomous decision-making units with diverse characteristics. The interaction of the agents’ individual behaviours produces the global behaviour of the system. Because ABMS offers a methodology for creating an artificial society in which actors and their behaviour can be designed and the results of their interaction observed, it has gained attention in social sciences such as Economics, Ecology, Social Psychology and Sociology. In Economics, ABMS has been used to model many strategic situations, one of the most popular being the Public Goods Game (PGG). In the PGG, participants secretly choose how many of their private money units to put into a public pot. Social scientists conduct laboratory experiments with PGGs to study human behaviour in strategic situations, and research findings from these laboratory studies have inspired studies using computational agents and vice versa. However, there is a lack of guidelines for the detailed development process and the modelling of agent behaviour in agent-based models of PGGs. We believe this has contributed to ABMS of PGG not being used to its full potential. This thesis aims to leverage that potential, focusing on the development methodology of ABMS and the modelling of agent behaviour. We construct a development framework incorporating software engineering techniques and tailor it to ABMS of PGG. The framework uses the Unified Modelling Language (UML) as a standard specification language, and includes a simulation development lifecycle, a step-by-step development guideline, and a short guide to modelling agent behaviour with statecharts. It utilises software engineering methods to provide a structured approach to identifying agent interactions and designing the simulation architecture and agent behaviour. The framework is named ABOOMS (Agent-Based Object-Oriented Modelling and Simulation). Applied to three case studies, the framework demonstrates flexibility in development under two different modelling principles (Keep-It-Simple-Stupid vs. Keep-It-Descriptive-Stupid), the capability to support complex psychological mechanisms, and the ability to model dynamic behaviours in both discrete and continuous time. Additionally, the thesis develops an agent-based model of a PGG in a continuous-time setting; to the best of our knowledge, no such agent-based model previously existed. During its development, a new social preference, Generous Conditional Cooperators, was introduced to better explain the behavioural dynamics of the continuous-time PGG. Experimentation with the agent-based model generated dynamics that are not present in the discrete-time setting. It is therefore important to study both discrete- and continuous-time PGGs, with laboratory experiments and with ABMS; our new framework allows the latter to be done in a structured way. With the ABOOMS framework, economists can develop PGG simulation models in a structured way and communicate them with a formal model specification. The thesis also shows that further investigation of behaviour in continuous-time PGGs is needed. In future work, the framework could be tested with variations of the PGG or other related strategic interactions.
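For concreteness, here is a minimal discrete-time sketch of a linear PGG with conditional cooperators. The `generosity` offset is a crude stand-in for the thesis's Generous Conditional Cooperators; the precise preference and the continuous-time mechanics are not specified in the abstract.

```python
import random

def pgg_payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Linear PGG round: the pot is multiplied and shared equally, so each
    payoff is what a player kept plus an equal share of the pot. With
    multiplier < group size, free-riding is individually optimal."""
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

def conditional_contribution(endowment, others_avg, generosity=1.0):
    """Match the average contribution of the others, plus a small
    'generous' offset (the illustrative assumption here)."""
    return min(endowment, max(0.0, others_avg + generosity))

# Simulate a group of four over ten discrete rounds.
n, endowment = 4, 20.0
contribs = [random.uniform(0, endowment) for _ in range(n)]
for _ in range(10):
    avgs = [(sum(contribs) - c) / (n - 1) for c in contribs]
    contribs = [conditional_contribution(endowment, a) for a in avgs]
print([round(c, 1) for c in contribs], pgg_payoffs(contribs))
```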
190

Runtime analysis of evolutionary algorithms with complex fitness evaluation mechanisms

Corus, Dogan January 2018
Evolutionary algorithms (EAs) are bio-inspired general-purpose optimisation methods applicable to a wide range of problems. The performance of an EA can vary considerably according to the problem it tackles. Runtime analyses of EAs rigorously prove bounds on the expected computational resources required by an EA to solve a given problem. A crucial component of an EA is the way it evaluates the quality (i.e. fitness) of candidate solutions, and different fitness evaluation methods may drastically change the efficiency of a given EA. In this thesis, the effects of different fitness evaluation methods on the performance of evolutionary algorithms are investigated. A major contribution of the thesis is the first runtime analysis of EAs on bi-level optimisation problems. The performance of different EAs on the Generalised Minimum Spanning Tree Problem and the Generalised Travelling Salesperson Problem is analysed to illustrate how bi-level problem structures can be exploited to delegate part of the optimisation effort to problem-specific deterministic algorithms. Different bi-level representations are considered, and it is proved that one of them leads to fixed-parameter evolutionary algorithms for both problems with respect to the number of clusters. Secondly, a new mathematical tool called the level-based theorem is presented. The theorem is a high-level analytical tool which provides upper bounds on the runtime of a wide range of non-elitist population-based algorithms with independent sampling that may use sophisticated high-arity variation operators such as crossover. The independence of this new tool from the objective function allows runtime analyses of EAs which use complicated fitness evaluation methods. As an application of the level-based theorem, we conduct, for the first time, runtime analyses of non-elitist genetic algorithms on pseudo-Boolean test functions and on three classical combinatorial optimisation problems. The last major contribution of the thesis is an illustration of how the level-based theorem can be used to design genetic algorithms with guaranteed runtime bounds. The well-known graph problems Single Source Shortest Path and All-Pairs Shortest Path are used as test beds. The fitness evaluation method is tailored to incorporate the optimisation approach of a well-known problem-specific algorithm, and it is rigorously proved that the presented EA optimises both problems efficiently. The thesis concludes with a discussion of the wider implications of the presented work, and future research directions are explored.
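The algorithm class the level-based theorem covers can be made concrete with a skeletal non-elitist EA in which every member of the next population is sampled independently from the current one (here via binary tournament selection and bitwise mutation). The parameter values and the OneMax test function are standard illustrations, not specifics from the thesis.

```python
import random

def non_elitist_ea(fitness, n, lam=100, chi=0.98, generations=500):
    """Non-elitist population EA with independent sampling: each of the
    lam offspring picks a parent by binary tournament and flips each bit
    independently with probability chi/n. The new population wholly
    replaces the old one (no elitism)."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(lam)]
    for _ in range(generations):
        def parent():
            a, b = random.sample(pop, 2)  # binary tournament
            return a if fitness(a) >= fitness(b) else b
        pop = [[bit ^ (random.random() < chi / n) for bit in parent()]
               for _ in range(lam)]
    return max(pop, key=fitness)

onemax = sum  # OneMax: maximise the number of one-bits
print(onemax(non_elitist_ea(onemax, n=50)), "ones out of 50")
```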
