About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

An authoring and presentation environment for interactive worked examples

Song, Yulun January 2015
This dissertation describes an authoring environment, called IWE, which allows a teacher to develop computer-based interactive worked examples without bespoke programming. The focus is on worked examples that involve transforming one representation into another using judgment rather than algorithms or rules. The worked examples created are all drawn from Computing Science; for example, transforming a requirements specification into an entity-relationship diagram. Teachers model the problem-solving process as a sequence of steps demonstrating how the problem is translated step-by-step into a solution, explaining the decision-making in each step. They can incorporate questions within the examples to increase student engagement and encourage students to think actively. Students interact with the transformation process at their own pace to gain experience of problem-solving. Teachers are able to evolve the examples based on feedback from students and usage data from the system. A review of educational literature identified best practice guidelines for designing and presenting effective worked examples for novices and faded worked examples for intermediate learners. These guidelines informed the essential requirements of IWE. A prototype authoring environment was designed, implemented and evaluated. Educational literature also recommends combining worked examples with practice of problem solving. A field study was conducted applying these recommendations to evaluate the usability of IWE. Evaluations were carried out with teachers to assess their ability to create and modify interactive worked examples while the teaching of their courses was in progress. Evaluations were also carried out with students to assess the usability of IWE. The main conclusion of this research, based on analysis of the evaluations, is that the prototype of IWE is usable by both teachers and students. 
It allows teachers to create interactive worked examples following best practice and evolve existing examples on the basis of feedback. It allows students to use interactive worked examples independently following best practice. Finally, the dissertation identifies some possibilities for widening the scope of this research.
222

Recognition of complex human activities in multimedia streams using machine learning and computer vision

Kaloskampis, Ioannis January 2013
Modelling human activities observed in multimedia streams as temporal sequences of their constituent actions has been the object of much research effort in recent years. However, most of this work concentrates on tasks where the action vocabulary is relatively small and/or each activity can be performed in a limited number of ways. In this thesis, a novel and robust framework for modelling and analysing composite, prolonged activities arising in tasks which can be effectively executed in a variety of ways is proposed. Additionally, the proposed framework is designed to handle cognitive tasks, which cannot be captured using conventional types of sensors. It is shown that the proposed methodology is able to efficiently analyse and recognise complex activities arising in such tasks and also detect potential errors in their execution. To achieve this, a novel activity classification method comprising a feature selection stage based on the novel Key Actions Discovery method and a classification stage based on the combination of Random Forests and Hierarchical Hidden Markov Models is introduced. Experimental results captured in several scenarios arising from real-life applications, including a novel application to a bridge design problem, show that the proposed framework offers higher classification accuracy compared to current activity identification schemes.
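The abstract does not specify how the Key Actions Discovery method works. As a loose, generic stand-in for that feature selection stage, the sketch below ranks actions by how strongly their frequency differs between two activity classes; the action labels and sequences are invented, and the ranking criterion is a simple frequency-difference heuristic, not the thesis's method:

```python
from collections import Counter

def action_freqs(sequences):
    # Relative frequency of each action label across all sequences of a class.
    counts = Counter(a for seq in sequences for a in seq)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def key_actions(class_a, class_b, k=2):
    # Rank actions by absolute frequency difference between the two classes,
    # returning the k most discriminative ("key") actions.
    fa, fb = action_freqs(class_a), action_freqs(class_b)
    actions = set(fa) | set(fb)
    ranked = sorted(actions, key=lambda a: abs(fa.get(a, 0) - fb.get(a, 0)),
                    reverse=True)
    return ranked[:k]

# Hypothetical action sequences for two classes of design activity.
drawing = [["pan", "zoom", "sketch", "sketch"], ["sketch", "pan", "sketch"]]
reviewing = [["pan", "zoom", "annotate"], ["zoom", "annotate", "pan"]]
print(key_actions(drawing, reviewing))
```

The selected key actions would then feed the classification stage (Random Forests combined with Hierarchical Hidden Markov Models in the thesis), which is beyond the scope of this sketch.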
223

Validation of queries to a relational database

Hainline, Douglas Ray January 1986
This thesis addresses the problem of preventing users of a database system from interrogating it with query language expressions which are syntactically and semantically valid but which do not match the user's intentions. A method is developed of assisting users of a relational database to formulate query language expressions which are valid representations of the abstract query the user wishes to pose. The central focus of the thesis is a method of communicating the critical aspects of the semantics of the relation which would be generated in response to a user's proposed operations on the database. Certain classes of user error which can arise when using a relational algebra query system are identified, and a method of demonstrating their invalidity is presented. This is achieved by representing via a graph the consequences of operations on relations. Also developed are techniques allowing the generation of pseudo-natural language text describing the relations which would be created as the result of the user's proposed query language operations. A method of allowing the creators of database relations to incorporate informative semantic data about their relations is developed. A method of permitting this data to be modified by query language operations is specified. Pragmatic linguistic considerations which arise when this data is used to generate pseudo-natural language statements are addressed, and examples of the system's use are given.
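As a loose illustration of the pseudo-natural-language idea described above, the sketch below renders a small relational algebra expression tree as an English phrase. The relation and attribute names, and the rendering rules themselves, are invented for illustration rather than taken from the thesis:

```python
def describe(node) -> str:
    # Render a relational algebra expression tree as an English phrase.
    # Nodes are tuples: ("relation", name), ("select", condition, child),
    # ("project", [attrs], child), ("join", left, right).
    op = node[0]
    if op == "relation":
        return f"the {node[1]} relation"
    if op == "select":
        return f"those tuples of {describe(node[2])} where {node[1]}"
    if op == "project":
        return f"the {', '.join(node[1])} attributes of {describe(node[2])}"
    if op == "join":
        return f"the join of {describe(node[1])} with {describe(node[2])}"
    raise ValueError(f"unknown operator: {op}")

# Hypothetical query: project name, salary over a selection on a join.
query = ("project", ["name", "salary"],
         ("select", "dept = 'Sales'",
          ("join", ("relation", "EMPLOYEE"), ("relation", "DEPT"))))
print(describe(query))
```

A description like this, shown before the query is executed, lets the user compare the system's reading of the expression against the abstract query they intended, which is the error-prevention role the thesis assigns to such text.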
224

Using cultural familiarity for usable and secure recognition-based graphical passwords

Aljahdali, Hani Moaiteq January 2015
Recognition-based graphical passwords (RBGPs) are a promising alternative to alphanumeric passwords for user authentication. The literature presents several schemes seeking the best types of pictures in terms of usability and security. This thesis investigates the positive use of cultural familiarity with pictures to create usable and secure recognition-based graphical passwords, studied in two different countries: Scotland and Saudi Arabia. This thesis presents an evaluation of a culturally-familiar graphical password scheme (CFGPS). This scheme is based on pictures that represent daily life in different cultures. Those pictures were selected from a database containing 797 pictures representing the cultures of 30 countries. This database was created as the first step in this thesis from the responses to 263 questionnaires. The evaluation of the scheme goes through five phases: registration phase, usability phase, security phase, interviews phase, and guidelines phase. In the registration phase, a web-based study was conducted to determine the impact of cultural familiarity on choosing pictures for GPs. A large number of participants (Saudi and Scottish) registered their GPs. The results showed that users were highly affected by their culture when they chose pictures for their GPs; however, the Saudis were significantly more affected by their culture than the Scottish. This study showed developers the importance of having a selection of pictures that are as familiar as possible to users in order to create suitable GPs. In the usability phase, the participants were asked to log in with their GPs three months after the registration phase. The main results showed that the memorability rate for GPs consisting only of pictures belonging to participants’ culture was higher than the memorability rate for GPs consisting of pictures that did not belong to participants’ culture. However, there was no evidence of a cultural familiarity effect on login time. 
In the security phase, a within-subject user study was conducted to examine the security of culturally-familiar GPs against educated guessing attacks. This study was also the first attempt to investigate the risk of using personal information shared by users on social networks to guess their GPs. The results showed high guessability for CFGPs. The interviews phase evaluated the qualitative aspects of the CFGPS in order to improve its performance. In-depth interviews with users of the scheme suggested guidelines for both developers and users to increase the usability and security of the scheme. Those guidelines are not exclusive to the culturally-familiar scheme, as they can be used for all RBGP schemes. Finally, following one of the instructions stated in the developers’ guidelines, different challenge set designs were evaluated based on their cultural familiarity to users. The results showed high usability for the culturally-familiar challenge set, while the security target was met by the culturally-unfamiliar challenge set. To balance these two factors, following the user guidelines covered the weaknesses of both designs.
225

Constraint specification by example in a meta-CASE tool

Qattous, Hazem Kathem January 2011
Meta-CASE tools offer the ability to specialise and customise diagram-based software modelling editors. Constraints play a major role in these specialisation and customisation tasks. However, constraint definition is complicated. This thesis addresses the problem of constraint specification complexity in meta-CASE tools. Constraint Specification by Example (CSBE), a novel variant of Programming by Example, is proposed as a technique that can simplify and facilitate constraint specification in meta-CASE tools. CSBE involves a user presenting visual examples of diagrams to the tool, which engages in a synergistic interaction with the user, based on system inference and additional user input, to arrive at the user’s intended constraint. A prototype meta-CASE tool has been developed that incorporates CSBE. This prototype was used to perform several empirical studies to investigate the feasibility and potential advantages of CSBE. An empirical study was conducted to evaluate the performance of CSBE, in terms of effectiveness, efficiency and user satisfaction, compared to a typical form-filling technique. Results showed that users using CSBE correctly specified significantly more constraints and required less time to accomplish the task. Users reported higher satisfaction when using CSBE. A second empirical online study was conducted with the aim of discovering the preference of participants for positive or negative natural language polarity when expressing constraints. Results showed that subjects preferred positive constraint expression over negative expression. A third empirical study aimed to discover the effect of example polarity (negative vs. positive) on the performance of CSBE. A multi-polarity tool offering both positive and negative examples scored significantly higher correctness, in a significantly shorter time to accomplish the task, and with significantly higher user satisfaction, compared to a tool offering only one example polarity. 
A fourth empirical study examined user-based addition of new example types and inference rules into the CSBE technique. Results demonstrated that users are able to add example types and that performance is improved when they do so. Overall, CSBE has been shown to be feasible and to offer potential advantages compared to other commonly-used constraint specification techniques.
226

Using program behaviour to exploit heterogeneous multi-core processors

McIlroy, Ross January 2010
Multi-core CPU architectures have become prevalent in recent years. A number of multi-core CPUs consist not only of multiple processing cores, but also of multiple different types of processing cores, each with different capabilities and specialisations. These heterogeneous multi-core architectures (HMAs) can deliver exceptional performance; however, they are notoriously difficult to program effectively. This dissertation investigates the feasibility of ameliorating many of the difficulties encountered in application development on HMA processors, by employing a behaviour-aware runtime system. This runtime system provides applications with the illusion of executing on a homogeneous architecture, by presenting a homogeneous virtual machine interface. The runtime system uses knowledge of a program's execution behaviour, gained through explicit code annotations, static analysis or runtime monitoring, to inform its resource allocation and scheduling decisions, such that the application makes best use of the HMA's heterogeneous processing cores. The goal of this runtime system is to enable non-specialist application developers to write applications that can exploit an HMA, without the developer requiring in-depth knowledge of the HMA's design. This dissertation describes the development of a Java runtime system, called Hera-JVM, aimed at investigating this premise. Hera-JVM supports the execution of unmodified Java applications on both processing core types of the heterogeneous IBM Cell processor. An application's threads of execution can be transparently migrated between the Cell's different core types by Hera-JVM, without requiring the application's involvement. A number of real-world Java benchmarks are executed across both of the Cell's core types, to evaluate the efficacy of abstracting a heterogeneous architecture behind a homogeneous virtual machine. 
By characterising the performance of each of the Cell processor's core types under different program behaviours, a set of influential program behaviour characteristics is uncovered. A set of code annotations are presented, which enable program code to be tagged with these behaviour characteristics, enabling a runtime system to track a program's behaviour throughout its execution. This information is fed into a cost function, which Hera-JVM uses to automatically estimate whether the executing program's threads of execution would benefit from being migrated to a different core type, given their current behaviour characteristics. The use of history, hysteresis and trend tracking, by this cost function, is explored as a means of increasing its stability and limiting detrimental thread migrations. The effectiveness of a number of different migration strategies is also investigated under real-world Java benchmarks, with the most effective found to be a strategy that can target code, such that a thread is migrated whenever it executes this code. This dissertation also investigates the use of runtime monitoring to enable a runtime system to automatically infer a program's behaviour characteristics, without the need for explicit code annotations. A lightweight runtime behaviour monitoring system is developed, and its effectiveness at choosing the most appropriate core type on which to execute a set of real-world Java benchmarks is examined. Combining explicit behaviour characteristic annotations with those characteristics which are monitored at runtime is also explored. Finally, an initial investigation is performed into the use of behaviour characteristics to improve application performance under a different type of heterogeneous architecture, specifically, a non-uniform memory access (NUMA) architecture. Thread teams are proposed as a method of automatically clustering communicating threads onto the same NUMA node, thereby reducing data access overheads. 
Evaluation of this approach shows that it is effective at improving application performance, if the application's threads can be partitioned across the available NUMA nodes of a system. The findings of this work demonstrate that a runtime system with a homogeneous virtual machine interface can reduce the challenge of application development for HMA processors, whilst still being able to exploit such a processor by taking program behaviour into account.
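The cost-function decision with hysteresis described above can be illustrated with a small sketch. The behaviour characteristics, cost weights and hysteresis margin below are invented for illustration; they are not Hera-JVM's actual model:

```python
# Hypothetical relative cost weights per behaviour characteristic for the
# Cell's two core types (PPE and SPE). All numbers are illustrative.
COSTS = {
    "ppe": {"fp_intensity": 1.5, "branchiness": 0.8, "memory_footprint": 0.7},
    "spe": {"fp_intensity": 0.4, "branchiness": 1.6, "memory_footprint": 1.8},
}
HYSTERESIS = 0.15  # relative saving required before a migration is triggered

def core_cost(core: str, behaviour: dict) -> float:
    # Weighted cost of running a thread with this behaviour on this core type.
    return sum(COSTS[core][k] * v for k, v in behaviour.items())

def should_migrate(current: str, behaviour: dict) -> str:
    other = "spe" if current == "ppe" else "ppe"
    here, there = core_cost(current, behaviour), core_cost(other, behaviour)
    # Only migrate if the other core type is cheaper by more than the
    # hysteresis margin, limiting detrimental back-and-forth migrations.
    return other if there < here * (1 - HYSTERESIS) else current

# A floating-point-heavy thread with little branching: the cost model
# estimates it would run more cheaply on an SPE.
fp_thread = {"fp_intensity": 0.9, "branchiness": 0.1, "memory_footprint": 0.2}
print(should_migrate("ppe", fp_thread))
```

History and trend tracking, which the dissertation also explores, would feed smoothed rather than instantaneous behaviour values into such a decision.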
227

Numerical simulation of the instabilities of a 2D collapsible channel flow

Liu, Haofei January 2010
Collapsible channel flows that originated from physiological applications have many intriguing dynamic system behaviours. In this thesis, the stability of a two-dimensional collapsible channel flow is studied numerically. Three approaches are adopted to investigate the fluid-structure interaction problem: an in-house Finite Element (FE) based Fluid-Beam model (FBM), a commercial FE-based code, ADINA, and an eigensolver derived from the FBM (linear analysis). Two types of inlet boundary conditions are considered. One is the flow-driven system, where the inlet flow rate is specified, and the other is the pressure-driven system, where the pressure drop is given. It turns out that these two systems yield very different dynamical features even though the steady solutions are the same. For the flow-driven system, a range of steady solutions are studied with both zero and non-zero initial wall tension by means of both FBM (using the initial stress configuration) and ADINA (equipped with both initial strain and initial stress configurations). As expected, the FBM agrees with ADINA when using the initial stress configuration, but not when the initial strain configuration is adopted. This established the importance of the initial configuration. The effects of different wall thicknesses on the steady wall performance have also been shown to be significant. Fully-coupled unsteady simulations have also been performed with FBM (Bernoulli-Euler beam) and ADINA (Timoshenko beam) to demonstrate significant influences of modelling assumptions on the dynamical behaviour. In addition to unsteady simulations, linear stability analysis is also carried out to identify the critical parameter values that occur when the system is in the neutrally stable state. Using the fast Fourier transform, the unsteady results are then compared with the linear stability analysis results. Excellent agreement is achieved in terms of frequencies of modes of instabilities. 
Finally, we focus on the dynamical behaviour of collapsible channel flows in a pressure-driven system, and the differences from those of the flow-driven system (Luo et al. 2008). It is found that the stability structure for the pressure-driven system is no longer a cascade as in the flow-driven case. Instead, the mode-1 instability is the dominating unstable mode in the pressure-driven system. In the pressure drop and wall stiffness space, the neutrally stable mode-2 curve is completely enclosed by the mode-1 neutral curve, and there is no purely mode-2 unstable solution in the parameter space investigated. Interesting mode switching is also observed. By analysing the energy budgets at the neutrally stable points, we confirmed that in the high tension region (on the upper branch of the mode-1 neutral curve), the stability mechanism is the same as that of Jensen & Heil (2003). Namely, self-excited oscillations can grow by extracting kinetic energy from the mean flow, with exactly two thirds of the net kinetic energy flux dissipated by the oscillations and the remainder balanced by increased dissipation in the mean flow. However, the mechanism does not apply for the lower branch of the mode-1 neutral curve. In addition, the energy balance changes further for the mode-2 curves in the flow-driven system. It is clear that different mechanisms are operating in different regions of the parameter space, and for different boundary conditions.
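The FFT-based frequency comparison mentioned above can be illustrated with a minimal sketch: take a wall-displacement time series from an unsteady run and read the dominant oscillation frequency off its spectrum, for comparison with the eigensolver's prediction. The signal below is synthetic (a growing oscillation at a hypothetical mode frequency), not data from the thesis:

```python
import numpy as np

# Synthetic stand-in for a wall-displacement time series from an unsteady
# simulation: a slowly growing oscillation at 2.6 Hz (hypothetical mode
# frequency), sampled at 100 Hz for 20 s.
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
signal = np.exp(0.05 * t) * np.sin(2 * np.pi * 2.6 * t)

# FFT of the fluctuating part; the dominant spectral peak gives the mode
# frequency to compare against the linear stability (eigenvalue) result.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=dt)
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.2f} Hz")
```

Agreement between this measured frequency and the imaginary part of the unstable eigenvalue is the kind of cross-validation the thesis reports between its unsteady simulations and linear analysis.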
228

Detecting worm mutations using machine learning

Sharma, Oliver January 2008
Worms are malicious programs that spread over the Internet without human intervention. Since worms generally spread faster than humans can respond, the only viable defence is to automate their detection. Network intrusion detection systems typically detect worms by examining packet or flow logs for known signatures. Not only does this approach mean that new worms cannot be detected until the corresponding signatures are created, but also that mutations of known worms will remain undetected, because each mutation will usually have a different signature. The intuitive and seemingly most effective solution is to write more generic signatures, but this has been found to increase false alarm rates and is thus impractical. This dissertation investigates the feasibility of using machine learning to automatically detect mutations of known worms. First, it investigates whether Support Vector Machines can detect mutations of known worms. Support Vector Machines have been shown to be well suited to pattern recognition tasks such as text categorisation and hand-written digit recognition. Since detecting worms is effectively a pattern recognition problem, this work investigates how well Support Vector Machines perform at this task. The second part of this dissertation compares Support Vector Machines to other machine learning techniques in detecting worm mutations. Gaussian Processes, unlike Support Vector Machines, automatically return confidence values as part of their result. Since confidence values can be used to reduce false alarm rates, this dissertation determines how Gaussian Processes compare to Support Vector Machines in terms of detection accuracy. For further comparison, this work also compares Support Vector Machines to K-nearest neighbours, known for its simplicity and solid results in other domains. The third part of this dissertation investigates the automatic generation of training data. 
Classifier accuracy depends on good quality training data -- the wider the training data spectrum, the higher the classifier's accuracy. This dissertation describes the design and implementation of a worm mutation generator whose output is fed to the machine learning techniques as training data. This dissertation then evaluates whether the training data can be used to train classifiers of sufficiently high quality to detect worm mutations. The findings of this work demonstrate that Support Vector Machines can be used to detect worm mutations, and that the optimal configuration for detection of worm mutations is to use a linear kernel with unnormalised bi-gram frequency counts. Moreover, the results show that Gaussian Processes and Support Vector Machines exhibit similar accuracy on average in detecting worm mutations, while K-nearest neighbours consistently produces lower quality predictions. The generated worm mutations are shown to be of sufficiently high quality to serve as training data. Combined, the results demonstrate that machine learning is capable of accurately detecting mutations of known worms.
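The reported optimal configuration, a linear kernel over unnormalised bi-gram frequency counts, can be sketched in miniature. The payloads below are invented, and a nearest-centroid linear score stands in for the trained SVM's linear decision function; this is an illustration of the feature representation, not the thesis's classifier:

```python
from collections import Counter

def bigram_counts(payload: bytes) -> Counter:
    # Unnormalised frequency counts of adjacent byte pairs (bi-grams).
    return Counter(zip(payload, payload[1:]))

def vectorise(c: Counter, vocab: list) -> list:
    return [c.get(bg, 0) for bg in vocab]

# Hypothetical payloads: two "worm mutations" and two benign requests.
worm = [b"\x90\x90\x31\xc0\x50\x68", b"\x90\x31\xc0\x50\x68\x2f"]
benign = [b"GET /index.html HTTP/1.1", b"POST /login HTTP/1.1"]

counts = [bigram_counts(p) for p in worm + benign]
vocab = sorted({bg for c in counts for bg in c})
vecs = [vectorise(c, vocab) for c in counts]

# Linear decision function: score against the difference of class means
# (a nearest-centroid stand-in for an SVM's learned weight vector).
n = len(vocab)
w_worm = [sum(v[i] for v in vecs[:2]) / 2 for i in range(n)]
w_ben = [sum(v[i] for v in vecs[2:]) / 2 for i in range(n)]
weights = [a - b for a, b in zip(w_worm, w_ben)]

def classify(payload: bytes) -> str:
    v = vectorise(bigram_counts(payload), vocab)
    score = sum(x * w for x, w in zip(v, weights))
    return "worm" if score > 0 else "benign"

print(classify(b"\x90\x90\x31\xc0\x50\x68\x2f"))  # shares worm bi-grams
```

Because a mutation of a known worm tends to preserve many of the original's byte bi-grams, its count vector stays close to the worm class in this feature space, which is what makes the representation suitable for detecting mutations.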
229

Hardware and software aspects of parallel computing

Bissland, Lesley January 1996
Part 1 (Chapters 2, 3 and 4) is concerned with the development of hardware for multiprocessor systems. Some of the concepts used in digital hardware design are introduced in Chapter 2. These include the fundamentals of digital electronics, such as logic gates and flip-flops, as well as the more complicated topics of ROM and programmable logic. It is often desirable to change the network topology of a multiprocessor machine to suit a particular application. The third chapter describes a circuit switching scheme that allows the user to alter the network topology prior to computation. To achieve this, crossbar switches are connected to the nodes, and the host processor (a PC) programs the crossbar switches to make the desired connections between the nodes. The hardware and software required for this system are described in detail. Whilst this design allows the topology of a multiprocessor system to be altered prior to computation, the topology is still fixed during program run-time. Chapter 4 presents a system that allows the topology to be altered during run-time. The nodes send connection requests to a control processor, which programs a crossbar switch connected to the nodes. This system allows every node in a parallel computer to communicate directly with every other node. The hardware interface between the nodes and the control processor is discussed in detail, and the software on the control processor is also described. Part 2 (Chapters 5 and 6) of this thesis is concerned with the parallelisation of a large molecular mechanics program. Chapter 5 describes the fundamentals of molecular mechanics, such as the steric energy equation and its components, force field parameterisation and energy minimisation. The integration of a novel programming environment (COMFORT) and hardware environment (the BB08) into a parallel molecular mechanics (MM) program is presented in Chapter 6. 
The structure of the sequential version of the MM program is detailed, before discussing the implementation of the parallel version using COMFORT and the BB08.
230

Resistance to change : a functional analysis of responses to technical change in a Swiss bank

Bauer, Martin January 1993
This thesis demonstrates the signal function and diagnostic value of user resistance in a software development project. Its starting point is the critical analysis of managerial common sense which negates resistance, or sees resistance to change as a 'nuisance' and as the manifestation of an individual or structural 'deficiency'; these notions prohibit change agents from appreciating the signal function of resistance to change in organisational processes. The first source of evidence is the literature on impacts, attitudes, and acceptance of information technology internationally and in particular in Switzerland. The second source is the tradition of psychological field theory, which I reconstruct as the 'feeding the reluctant eater' paradigm, a form of social engineering. The third source is an empirical study of the semantics (semantic differential and free associations) of 'resistance to change' among management trainees in the UK, Switzerland and the USA (N=388). The thesis develops and investigates a concept of resistance that is based on a pain analogy, and on the notions of self-monitoring and self-active systems. An organisation which is implementing new technology is a self-active system that directs and energises its activities with the help of internal and external communication. The functional analogy of the organismic pain system and resistance to change is explored. The analogy consists of parallel information processing, filtering and recoding of information, a bimodal pattern of attention over time, and the functions of attention allocation, evaluation, alteration and learning. With this analogy I am able to generate over 50 hypotheses on resistance to change and its effects on organisational processes. The evidence for some of these hypotheses is explored in an empirical study of a Swiss banking group. The implementation of computer services between 1983 and 1991 is reconstructed in the central bank and 24 branches. 
Data includes the analysis of two opinion surveys (1985 n=305; 1991 n=326), documents (n=134), narrative interviews (n=34), job analyses (n=34), field observations and performance data (n=24). A method is developed to describe the varying structure of organisational information processing through time. The content analysis allows me to describe when in relation to the action, how intensely, and in what manner 'resistance' becomes an issue between 1983 and 1991. The fruitfulness of the pain analogy is demonstrated (a) by shifting the analysis of resistance from structure to process, and from a dependent to an independent variable; (b) by shifting the focus from motivation to communication; (c) by eroding the a priori assumption that resistance is a nuisance; and (d) by indicating the diagnostic value of "bad news" in organisational communication; resistance is diagnostic information; it shows us when, where and why things go wrong.
