141.
A study of some M[x]/G/1 type queues with random breakdowns and Bernoulli schedule server vacations based on a single vacation policy
Maraghi, Farzana Abdulla. January 2008.
Queueing systems arise in the modelling of many practical applications in computer science, telecommunication networks, manufacturing and production, human-computer interaction, and so on. Classical queueing models, even vacation queues or queues subject to breakdown alone, may not be sufficiently realistic. The purpose of this research is to extend the work done on vacation queues and on unreliable queues by studying queueing systems which take both phenomena into consideration. We study the behaviour of a batch arrival queueing system with a single server, where the system is subject to random breakdowns which require a repair process, and where the server is allowed to take a vacation after finishing a service. Breakdowns are assumed to occur while serving a customer; when the system breaks down, it enters a repair process immediately, while the customer whose service was interrupted returns to the head of the queue to wait for service to resume. Server vacations are assumed to follow a Bernoulli schedule under a single vacation policy. We consider the above assumptions for different queueing models: queues with generalised service time, queues with two stages of heterogeneous service, queues with a second optional service, and queues with two types of service. For all the models mentioned above, it is assumed that the service times, vacation times, and repair times all have general arbitrary distributions. Applying the supplementary variable technique, we obtain probability generating functions of the queue size at a random epoch for different states of the system, and some performance measures such as the mean queue length, mean waiting time in the queue, proportion of the server's idle time, and the utilisation factor. The results obtained in this research show the effect of vacation and breakdown parameters upon the main performance measures of interest. These effects are also illustrated using some numerical examples and graphs.
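The models above layer batch arrivals, breakdowns and Bernoulli vacations on top of the classical M/G/1 queue. As a baseline sketch only (the classical Pollaczek-Khinchine result, not the thesis's own derivation), the mean queue length of a plain M/G/1 queue can be computed directly:

```python
# Mean number in system for a plain M/G/1 queue (Pollaczek-Khinchine).
# Baseline only: the thesis's models add batch arrivals, breakdowns
# and Bernoulli vacations on top of this.

def mg1_mean_in_system(lam, mean_s, scv_s):
    """lam: Poisson arrival rate; mean_s: mean service time;
    scv_s: squared coefficient of variation of the service time."""
    rho = lam * mean_s                                # utilisation factor, must be < 1
    assert rho < 1, "unstable queue"
    lq = rho ** 2 * (1 + scv_s) / (2 * (1 - rho))     # mean number waiting
    return rho + lq                                   # L = rho + Lq

# Exponential service (scv_s = 1) recovers the M/M/1 value L = rho/(1 - rho):
print(mg1_mean_in_system(0.5, 1.0, 1.0))              # 1.0
```

Deterministic service (scv_s = 0) halves the waiting component, one way the variability of the general service-time distribution feeds into the mean queue length mentioned in the abstract.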
142.
Bandit problems on parallel machines
Dunn, Ryan Thomas. January 2005.
Glazebrook and Wilkinson (2000) exploited the primal-dual approach within achievable region methodology to investigate the classical multi-armed bandit problem on identical machines working in parallel. We follow this analysis by utilising and developing elements of the account, given by Bertsimas and Niño-Mora (1996), of the branching bandit model on a single machine cast in the achievable region framework, in order to extend Glazebrook and Wilkinson's work to more general models in the parallel machine environment. We obtain performance guarantees for a range of simple index-based heuristic policies for models in which bandits compete for processing by machines of differing speeds, and also where the number of available (identical) machines is a stochastic process. From these performance guarantees various forms of asymptotic optimality are established. Numerical studies provide insights concerning the degree of conservatism in the theoretical results.
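As an illustration of the kind of simple index policy analysed (a sketch with made-up job data, not taken from the thesis), weighted-shortest-processing-time list scheduling on identical parallel machines serves jobs in decreasing order of the index w/p, greedily assigning each to the least-loaded machine:

```python
# WSPT list scheduling on identical parallel machines: an example of a
# simple index-based heuristic policy. Job data are illustrative.

def wspt_schedule(jobs, n_machines):
    """jobs: list of (weight, processing_time) pairs; returns the total
    weighted completion time under the WSPT index policy."""
    order = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    loads = [0.0] * n_machines
    total = 0.0
    for w, p in order:
        m = loads.index(min(loads))   # least-loaded machine serves next
        loads[m] += p                 # job completes at the machine's new load
        total += w * loads[m]
    return total

# three jobs on two machines
print(wspt_schedule([(3, 1), (2, 1), (1, 2)], 2))  # 8.0
```

On a single machine this index order is optimal; on parallel machines it is only a heuristic, which is exactly the setting where performance guarantees of the kind the abstract describes become interesting.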
143.
Towards a crowdsourced solution for the authoring bottleneck in interactive narratives
Kriegel, Michael. January 2015.
Interactive Storytelling research has produced a wealth of technologies that can be employed to create personalised narrative experiences, in which the audience takes a participating rather than observing role. But so far this technology has not led to the production of large scale playable interactive story experiences that realise the ambitions of the field. One main reason for this state of affairs is the difficulty of authoring interactive stories, a task that requires describing a huge amount of story building blocks in a machine-friendly fashion. This is not only technically and conceptually more challenging than traditional narrative authoring but also a scalability problem. This thesis examines the authoring bottleneck through a case study and a literature survey and advocates a solution based on crowdsourcing. Prior work has already shown that combining a large number of example stories collected from crowd workers with a system that merges these contributions into a single interactive story can be an effective way to reduce the authorial burden. As a refinement of such an approach, this thesis introduces the novel concept of Crowd Task Adaptation. It argues that in order to maximise the usefulness of the collected stories, a system should dynamically and intelligently analyse the corpus of collected stories and, based on this analysis, modify the tasks handed out to crowd workers. Two authoring systems, ENIGMA and CROSCAT, which take two radically different approaches to using the Crowd Task Adaptation paradigm, have been implemented and are described in this thesis. While ENIGMA adapts tasks through a real-time dialogue between crowd workers and the system that is based on what has been learned from previously collected stories, CROSCAT modifies the backstory given to crowd workers in order to optimise the distribution of branching points in the tree structure that combines all collected stories.
Two experimental studies of crowdsourced authoring are also presented. They lead to guidelines on how to employ crowdsourced authoring effectively, but more importantly the results of one of the studies demonstrate the effectiveness of the Crowd Task Adaptation approach.
144.
Modelling of human control and performance evaluation using artificial neural network and brainwave
Samarnggoon, Keattikorn. January 2016.
Conventionally, a human has to learn to operate a machine by himself or herself. Human Adaptive Mechatronics (HAM) aims to investigate machines that can learn their operator's skills in order to provide assistance and guidance appropriately. Understanding human behaviour during human-machine interaction (HMI) from the machine's side is therefore essential. The focus of this research is to propose a model of human-machine control strategy and performance evaluation from the machine's point of view. Various HAM simulation scenarios are developed for the investigation of HMI. The first case study, which uses the classic pendulum-driven capsule system, reveals that a human can learn to control an unfamiliar system and summarise the control strategy as a set of rules. Further investigation of the case study is conducted with nine participants to explore the performance differences and control characteristics among them. High performers tend to control the pendulum at high frequency in the right portion of the angle range, while low performers exhibit inconsistent control behaviour. This control information is used to develop a human-machine control model by adopting an Artificial Neural Network (ANN) and 10-times 10-fold cross-validation. Two models, predicting capsule direction and position, are obtained with 88.3% and 79.1% accuracy, respectively. An electroencephalogram (EEG) headset is integrated into the platform for monitoring brain activity during HMI. A number of preliminary studies reveal that the brain has a specific response pattern to particular stimuli compared to normal brainwaves. A novel human-machine performance evaluation based on EEG brainwaves is developed, utilising a classical target-hitting task as a case study of HMI. Six models are obtained for the evaluation of the corresponding performance aspects, including the Fitts index of performance. The average evaluation accuracy of the models is 72.35%. However, the accuracy drops to 65.81% when the models are applied to unseen data. In general, the accuracy can be considered satisfactory, since it is very challenging to evaluate HMI performance based only on EEG brainwave activity.
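The 10-times 10-fold cross-validation used to fit the control models can be sketched as follows; this pure-Python split generator illustrates the procedure and is not the thesis's implementation:

```python
import random

# Repeated k-fold cross-validation: reshuffle the data and re-split it into
# k folds, `repeats` times, yielding k * repeats train/test index pairs
# whose scores are then averaged.

def repeated_kfold(n_samples, k=10, repeats=10, seed=0):
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)                 # fresh shuffle for every repeat
        for fold in range(k):
            test = idx[fold::k]          # every k-th shuffled index
            held_out = set(test)
            train = [i for i in idx if i not in held_out]
            yield train, test

splits = list(repeated_kfold(100))
print(len(splits))                       # 100 splits: 10 repeats x 10 folds
```

Within each repeat the ten test folds partition the data, so every sample is held out exactly once per repeat; the repetition averages out the luck of any single shuffle.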
145.
The application of classical architectural criteria to 3D digital library design to enhance the learner experience
Mycock, Jane. January 2016.
Purpose – A new development, 3D digital libraries, involves the placement of objects and museum artefacts in a 3D visual context, often supported by learning resources and tools. This study proposes combined design principles from human-computer interaction (HCI) and architectural theory, and considers their impact on the user experience.
Methodological approach – Literature reviews of HCI and architectural principles relevant to the design of 3D digital libraries were conducted, and a model of design criteria combining architecture and HCI applicable to their design was proposed. A literature review of information-seeking behaviour also preceded the design of an experimental methodology to investigate the role of design on behaviour. An applied study then investigated the usability and usefulness of 3D digital libraries with young people aged 13-15, and the impact of both HCI and dimensional design features on user information behaviour, in particular whether or not they have the capacity to foster curiosity and further engagement with the subject matter. Primary methods were usability tests and semi-structured focus group interviews, conducted one week and then seven months after initial use of three 3D digital libraries.
Findings – The research results in a new 3D design framework for the study of 3D digital libraries combining classical architecture and HCI principles, and offers a tailored methodology for exploring the influence of design on behaviour. Findings and analysis indicate the key features of 3D digital libraries which influence learning and information seeking among 14-15 year-old female students in an English secondary school. The research concludes that 3D digital libraries are appealing and enjoyable to most participants in the 14-15 year-old female user group and thus provide a useful tool for the creation of a learning experience combining both experiential and instructional content. As resources which users perceive themselves to be "visiting", they are comparable to a school field trip among the forms of learning familiar to school teachers, but they confer extra benefits, such as convenience and comfort, to both users and teachers.
Practical implications – The study provides a new framework for the design and investigation of 3D digital libraries harnessing the wisdom of classical architecture. It indicates benefits of 3D digital libraries to the user group which will be of interest to cultural providers and educators.
146.
Operator functional state modelling and adaptive control of automation in human-machine systems
Torres Salomao, Luis Alberto. January 2016.
In this study, a new modelling and control framework based on type-2 fuzzy logic is presented, validated with real-time experiments on human participants experiencing stress induced by mental arithmetic cognitive tasks. The overall aim of the proposed modelling and control framework is the management, and ultimately the prevention, of performance breakdown in a human-computer interaction system, with a special focus on human performance. The work starts with a literature-based study of previously successful experimental designs, selecting the mental arithmetic cognitive task for its ease of implementation; its influence on commonly used psychophysiological markers is validated through a series of statistical tests on 12 participants. Additionally, a new marker for mental stress identification is introduced, the pupil diameter marker, validated with the same series of statistical tests for all 12 participants in the study. For the validation of the introduced modelling and control techniques, two designed experiments, consisting of carrying out arithmetic operations of varying difficulty levels, were performed by 10 participants (operators). With this new technique, effective modelling is achieved through a new adaptive, self-organising and interpretable modelling framework based on general type-2 fuzzy sets. This framework is able to learn in real time through a restructured performance-learning algorithm that identifies important features in the data without the need for prior training. The information learnt by the model is then exploited via an energy-model-based controller that infers adequate control actions by changing the difficulty levels of the arithmetic operations in the human-computer interaction system, these actions being based on the most current psychophysiological state of the subject under study. The successful real-time implementation of the proposed adaptive modelling and control strategies within the framework of the human-machine interaction under study shows superior performance compared to other forms of modelling and control, with minimal intervention in terms of model retraining or parameter retuning to deal with uncertainties, disturbances and inter/intra-subject parameter variability.
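For background, the simplest member of the type-2 family can be sketched directly: an interval type-2 Gaussian set whose mean is only known to lie in an interval [m1, m2]. This illustrates the footprint-of-uncertainty idea with made-up parameters; it is not the general type-2 framework developed in the thesis:

```python
import math

# Interval type-2 Gaussian membership function with uncertain mean in
# [m1, m2]. The gap between the upper and lower membership values (the
# footprint of uncertainty) is what type-2 sets add over ordinary type-1 sets.

def it2_gaussian(x, m1, m2, sigma):
    g = lambda m: math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
    if x < m1:
        upper = g(m1)
    elif x > m2:
        upper = g(m2)
    else:
        upper = 1.0                   # some mean in [m1, m2] fits x exactly
    lower = min(g(m1), g(m2))         # worst case: the mean sits at the far end
    return lower, upper

lo, up = it2_gaussian(0.0, 0.0, 1.0, 1.0)
print(up)                             # 1.0
print(round(lo, 4))                   # 0.6065  (= exp(-0.5))
```

A type-1 controller commits to a single membership curve; carrying the interval through the inference lets a type-2 controller absorb inter/intra-subject parameter variability of the kind the abstract reports.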
147.
Towards culture-centred design: a metaphor in human computer interaction
Shen, Siu-Tsen. January 2004.
No description available.
148.
Representing logics in type theory
Gardner, Philippa. January 1992.
Computer science today has many examples of logics given by proof systems. Although one intuitively knows how to use these systems and recognise correct derivations, there is no definitive account which captures this intuition. It is therefore natural to seek a framework for representing logics, which unifies the structure common to all logical systems. We introduce such a framework, called ELF+, based on the Edinburgh Logical Framework (ELF). The major advantage of ELF+ is that it allows us to give precise definitions of representation. Such definitions are not possible with ELF, since information is lost during encoding; the adequacy theorems of ELF representations are only applicable to particular encodings and cannot be generalised. We rectify this deficiency using the extra distinctions between terms provided by the universes of a pure type system, which yields a simple presentation of the type theory of ELF+. To do this, we extend these type systems to include signatures and β-equivalence. Using the ideas underlying representation in ELF+, we give a standard presentation of the logics under consideration, based on Martin-Löf's notion of judgements and Aczel's work on Frege structures. This presentation forms a reference point from which to investigate representations in ELF+; it is not itself a framework, since we do not specify a logic using a finite amount of information. Logics which do not fit this pattern are particularly interesting, as they are more difficult, if not impossible, to encode. The syntactic definitions of representations have an elegant algebraic formulation which utilises the abstract view of logics as consequence relations. The properties of the ELF+ entailment relation determine the behaviour of the variables and consequence relations of the logics under consideration. Encodings must preserve this common structure.
This motivates the presentation of the logics and their corresponding type theories as strict indexed categories (or split fibrations) so that encodings give rise to indexed functors. The syntactic notions of representation now have a simple formulation as indexed isomorphisms.
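As a reference point for the style of encoding under discussion, the judgements-as-types representation in the Edinburgh LF tradition can be written as a signature. The fragment below, for implication in minimal propositional logic, is the standard textbook example with hypothetical constant names, not the ELF+ signature itself:

```latex
\begin{align*}
o &: \mathrm{Type} && \text{syntactic category of propositions}\\
\mathit{true} &: o \to \mathrm{Type} && \text{the judgement that a proposition holds}\\
\mathit{imp} &: o \to o \to o && \text{implication as a term constructor}\\
\mathit{imp}_I &: \Pi\,\varphi,\psi{:}\,o.\;
  (\mathit{true}\,\varphi \to \mathit{true}\,\psi) \to \mathit{true}\,(\mathit{imp}\,\varphi\,\psi)
\end{align*}
```

Hypothetical derivations are encoded as functions of type $\mathit{true}\,\varphi \to \mathit{true}\,\psi$; it is in steps like this that an ELF encoding can lose information, roughly the deficiency that the universe distinctions of ELF+ are introduced to repair.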
149.
The logic of trust
Harwood, William. January 2012.
This thesis addresses two problems of trust:
1. Knowledge on trust: if we are provided with information by a variety of individuals, whom we trust to different degrees, what is the best overall theory we can form from the information we are given?
2. Social trust: if one does not have direct experience of an individual, how can one establish an initial degree of trust through the offices of society?
It addresses the first problem by developing a formal, mathematical and computational model of BonJour's coherence theory of knowledge, and the second by adapting abstract argumentation theory to reason about networks of relationships of trust and distrust between individuals. In developing the latter, it introduces a notion of generalised argumentation systems, giving their semantics via the Galois connections induced by binary relations, and provides a general scheme for evaluating these systems based on propositional model finding. Throughout, some effort is made to set the work in the context both of theories of trust and of the day-to-day trust situations that one encounters in everyday life.
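The abstract-argumentation machinery being adapted can be sketched concretely: the grounded extension of an attack graph is the least fixed point of the characteristic function F(S) = {a : every attacker of a is itself attacked by S}. The sketch below, with an illustrative attack graph, is this textbook baseline construction, not the thesis's generalised argumentation systems:

```python
# Grounded extension of an abstract argumentation framework, computed by
# iterating the characteristic function upward from the empty set.

def grounded_extension(arguments, attacks):
    """arguments: set of argument names; attacks: set of (attacker, target)."""
    def defended(a, s):
        attackers = [x for x, target in attacks if target == a]
        return all(any((d, x) in attacks for d in s) for x in attackers)

    current = set()
    while True:
        nxt = {a for a in arguments if defended(a, current)}
        if nxt == current:            # least fixed point reached
            return current
        current = nxt

# a attacks b, b attacks c: a stands unattacked and defends c against b
print(sorted(grounded_extension({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})))
# ['a', 'c']
```

On an odd cycle of attacks the iteration never leaves the empty set, reflecting the sceptical character of grounded semantics; relations of trust and distrust between individuals can be cast as such attack graphs.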
150.
Hardware architecture for a bi-directional protein processor associative memory
Qadir, Omer. January 2011.
The evolution of artificial intelligence has passed through many phases over the years, going from rigorous mathematical grounding to more intuitive bio-inspired approaches. However, to date, it has failed to pass the Turing test. A popular school of thought is that stagnation in the 1970s and 1980s was primarily due to insufficient hardware resources. However, if this had been the only reason, recent history should have seen AI advancing in leaps and bounds, something that is conspicuously absent. Despite the abundance of AI algorithms and machine learning techniques, the state of the art still fails to capture the rich analytical properties of biological beings or their robustness. Moreover, recent research in neuroscience points to a radically different approach to cognition, with distributed divergent connections rather than convergent ones. This leads one to question the entire approach that is prevalent in the discipline of AI today, so that a re-evaluation of the basic fabric of computation may be in order. In practice, the traditional solution for solving difficult AI problems has always been to throw more hardware at them. Today, that means more parallel cores. Although there are a few parallel hardware architectures that are novel, most parallel architectures, and especially the successful ones, simply combine von Neumann style processors to make a multi-processor environment. The drawbacks of the von Neumann architecture are widely published in the literature. Regardless, even though the novel architectures may not implement von Neumann style cores, computation is still based on arithmetic and logic units (ALUs). The aim of this research is to explore whether an alternative hardware architecture, inspired by the biological world and entirely different from traditional processing, may be better suited to implementing intelligent behaviour while also exhibiting robustness.