31

Security and collaborative groupware tools usage

AlAdraj, Resala A. January 2015 (has links)
This thesis investigates the usage problems of Online Collaborative Groupware (OCG) tools for learning at the University of Bahrain (UOB) in the Kingdom of Bahrain. An initial study revealed that the main problems faced by students when they use OCG tools in the learning process are security and trust. SWFG (Skype, Wiki, Facebook, and Gmail) tools were proposed as effective and commonly used OCG tools for learning. A quasi-experiment was conducted with UOB students to identify their perceptions of security, privacy and safety relating to the use of SWFG tools. Based on this experiment the researcher derived the following results: secure Skype has a positive relationship with Skype usage; private Skype has a positive relationship with Skype trust; secure Gmail has a negative relationship with Gmail usage and trust; and Wiki usage has a negative relationship with trust in Wikis. Additionally, the research revealed that students may be more motivated to use OCG tools if the security and privacy of these tools were improved. The thesis also focuses on security and trust within email. In order to evaluate the usage of secure email, students' awareness of secure email was investigated using quantitative and qualitative methods. The results of this evaluation informed the design of an experiment that was then conducted by tracking secure email usage and gathering information about the students' usage and awareness of their secure emails. The aim of this activity was to obtain a clear representation of secure email usage over specified periods, for both academic and non-academic purposes, by students in both the UK and Bahrain. It was concluded from this experiment that the usage of secure email differs between the two countries for both academic and non-academic purposes. Finally, based on these results, the researcher developed a framework which derives from the Technology Acceptance Model (TAM) by testing the effects of security and trust on ease of use and usefulness. A case study was conducted using a new secure email instructional model in order to validate the research framework. The study found that the security provided by webmail and students' trust affect webmail's perceived usefulness, and that in turn this leads to ease of use regardless of which type of email client is used. However, it was not proven that usefulness affects the usage of email. Evidence suggests that the model may be a suitable solution for increasing the usefulness of email in Computer Supported Collaborative Learning (CSCL), and can help to strengthen communication between faculty and students. This study has contributed valuable knowledge and information to this particular field of study. It gathered a satisfactory amount of information from both students and teachers at the University of Bahrain (UOB) and the University of Warwick (UOW). A number of different methods were used in this task – interviews, questionnaires, observations, experiments and student feedback, amongst others. The entire study was conducted so as to empirically evaluate different dimensions of secure Online Collaborative Groupware (OCG) tool usage in the educational environment. The research framework applied in this investigation provided many insights into OCG tools. These new insights and information may be used to test and validate the framework with a large number of students.
32

Distributed empirical modelling and its application to software system development

Sun, Pi-Hwa January 1999 (has links)
Empirical Modelling (EM) is a new approach for software system development (SSD) that is particularly suitable for ill-defined, open systems. By regarding a software system as a computer model, EM aims to acquire and construct the knowledge associated with the intended system by situated modelling, in which the modeller interacts with the computer model through continuous observations and experiments in an open-ended manner. In this way, a software system can be constructed that takes account of its context and is adaptable to the rapidly changing environment in which the system is developed and used. This thesis develops principles and tools for distributed Empirical Modelling (DEM). It proposes a framework for DEM by drawing on two crucial theories in social science: distributed cognition and ethnomethodology. This framework integrates cognitive and social processes, allowing multiple modellers to work collaboratively to explore, expand, experience and communicate their knowledge through interaction with their networked computer models. The concept of pretend play is proposed, whereby modellers as internal observers can interact with each other by acting in the role of agents within the intended system in order to shape the agency of such agents. The author has developed a tool called dtkeden to support the proposed DEM framework. Technical issues arising from the implementation of dtkeden and case studies in its use are discussed. The popular star-type logical network configuration and the client/server communication technique are exploited to construct the network environment of this tool. A protocol has been devised and embedded into the communication mechanism to achieve synchronisation of the computer models. Four interaction modes have been implemented in dtkeden to provide modellers with different forms of interpersonal interaction. In addition, using a virtual agent concept that was initially devised to allow definitions of different contexts to co-exist in a computer model, a definitive script can be interpreted as a generic observable that can serve as a reusable definitive pattern. Like experience in everyday life, this definitive pattern can be reused by particularising and adapting it to a specific context. A comparison between generic observables and abstract data types for reuse is given. The application of the DEM framework to requirements engineering is proposed. The requirements engineering process (REP) - currently poorly understood - is reviewed. To integrate requirements engineering with SSD, this thesis suggests re-engineering the REP by taking the context into account. On the basis of DEM, a framework (called SPORE) for the REP is established to guide the process of cultivating requirements in a situated manner. Examples of the use of this framework are presented, and comparisons with other approaches to requirements engineering are made.
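For readers unfamiliar with definitive scripts, the following is a minimal illustrative analogue in Python, not the EDEN/dtkeden notation used in the thesis: observables may be defined by formulas over other observables, so redefining one observable is automatically reflected wherever it is referenced.

```python
class Model:
    """Tiny analogue of a definitive script: observables are either plain values
    or definitions over other observables, re-evaluated whenever they are read."""
    def __init__(self):
        self.defs = {}

    def define(self, name, value):
        # `value` is either a constant or a callable that reads other observables
        self.defs[name] = value

    def __getitem__(self, name):
        v = self.defs[name]
        return v(self) if callable(v) else v

m = Model()
m.define("width", 10)
m.define("height", 4)
m.define("area", lambda m: m["width"] * m["height"])  # a dependency, not a snapshot
print(m["area"])       # 40
m.define("width", 7)   # redefinition propagates automatically on the next read
print(m["area"])       # 28
```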
33

An approach to formal reasoning about programs

Hitchcock, Peter January 1974 (has links)
This thesis presents a formal apparatus which is adequate both to express the termination and correctness properties of programs and also the necessary induction rules and axioms of their domains. We explore the applications of this formalism with particular emphasis on providing a basis for formalising the stepwise development of programs. The formalism provides, in some sense, the minimal extension into a second-order theory that is required. It deals with binary relations between tuples and the minimal fixpoints of monotone and continuous functionals on them. The correspondence between common constructs in programming languages and this formalism is shown in an informal manner. To show correctness of a program it is necessary to find an expression for its termination properties, which will depend on the induction rules for the data structures of the program. We show how these rules may be formally expressed and manipulated to derive other induction rules, and give a technique for mechanically deriving from a schema an expression for its domain, which may be expressed in terms of given induction rules by the manipulations referred to above. We give axiomatic definitions, including an induction rule, for some domains which commonly occur in programs, these being finite sets, trees, structures, arrays with fixed bounds, LISP S-expressions, linear lists, and the integers. In developing a program one may start by defining the basic operations and domains in an axiomatic manner. Development proceeds by finding satisfactory representations for this domain in terms of more specific domains and their operations, until finally one has domains which are representable in a target language. We discuss what is meant by a representation in an attempt to formalise this technique of data refinement, and also mention the less general notion of simulation, which requires that a representation is adequate for a particular program to work. A program may have been developed in a recursive manner, and if the target language does not contain recursion as a basic primitive it will be necessary to simulate it using stacks. We give axioms for such stacks, and give a mechanical procedure for obtaining, from any recursive program, a flowchart program augmented by stacks which simulates it.
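As a concrete, much-simplified illustration of that final point (the thesis gives a general, mechanical procedure at the level of program schemas, not this single example), the Python sketch below removes recursion from one function by introducing an explicit stack:

```python
def tree_sum_recursive(node):
    """Reference recursive form: a node is None or a (value, left, right) tuple."""
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_recursive(left) + tree_sum_recursive(right)

def tree_sum_with_stack(node):
    """The same computation flattened into a loop over an explicit stack,
    simulating the recursion without using it as a primitive."""
    total, stack = 0, [node]
    while stack:
        current = stack.pop()
        if current is None:
            continue
        value, left, right = current
        total += value
        stack.append(left)
        stack.append(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
assert tree_sum_recursive(tree) == tree_sum_with_stack(tree) == 10
```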
34

Bayesian structural inference with applications in social science

Goudie, Robert J. B. January 2011 (has links)
Structural inference for Bayesian networks is useful in situations where the underlying relationship between the variables under study is not well understood. This is often the case in social science settings in which, whilst there are numerous theories about interdependence between factors, there is rarely a consensus view that would form a solid base upon which inference could be performed. However, there are now many social science datasets available with sample sizes large enough to allow a more exploratory structural approach, and this is the approach we investigate in this thesis. In the first part of the thesis, we apply Bayesian model selection to address a key question in empirical economics: why do some people take unnecessary risks with their lives? We investigate this question in the setting of road safety, and demonstrate that less satisfied individuals wear seatbelts less frequently. Bayesian model selection over restricted structures is a useful tool for exploratory analysis, but fuller structural inference is more appealing, especially when there is a considerable quantity of data available, but scant prior information. However, robust structural inference remains an open problem. Surprisingly, it is especially challenging for large n problems, which are sometimes encountered in social science. In the second part of this thesis we develop a new approach that addresses this problem: a Gibbs sampler for structural inference, which we show gives robust results in many settings in which existing methods do not. In the final part of the thesis we use the sampler to investigate depression in adolescents in the US, using data from the Add Health survey. The result stresses the importance of adolescents not getting medical help even when they feel they should, an aspect that has been discussed previously, but not emphasised.
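The sketch below illustrates the general idea of Gibbs sampling over Bayesian network structures; it is not the sampler developed in the thesis. Each node's parent set is resampled in turn from its full conditional, restricted to acyclic choices. The linear-Gaussian BIC local score and the cap on parent-set size are illustrative assumptions.

```python
import itertools
import numpy as np

def gaussian_bic(data, node, parents):
    """Local score of `node` given `parents` under a linear-Gaussian model (BIC)."""
    y = data[:, node]
    n = len(y)
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * X.shape[1] * np.log(n)

def descendants(children, v):
    """Nodes reachable from v along directed edges."""
    seen, stack = set(), [v]
    while stack:
        for w in children[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def gibbs_structure_sampler(data, n_sweeps=50, max_parents=2, seed=0):
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    parents = {v: () for v in range(d)}              # start from the empty DAG
    samples = []
    for _ in range(n_sweeps):
        for v in range(d):
            # Child lists of the current graph (edges into v are irrelevant here).
            children = {u: [w for w in range(d) if u in parents[w]] for u in range(d)}
            # u may become a parent of v only if u is not a descendant of v (acyclicity).
            allowed = [u for u in range(d) if u != v and u not in descendants(children, v)]
            candidates = [s for k in range(max_parents + 1)
                          for s in itertools.combinations(allowed, k)]
            scores = np.array([gaussian_bic(data, v, s) for s in candidates])
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            parents[v] = candidates[rng.choice(len(candidates), p=probs)]
        samples.append(dict(parents))
    return samples
```

Posterior edge probabilities can then be estimated from the sampled parent sets, for example as the fraction of sweeps in which a given edge is present.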
35

Addressing parallel application memory consumption

Perks, Oliver F. J. January 2013 (has links)
Recent trends in computer architecture are furthering the gap between CPU capabilities and those of the memory system. The rise of multi-core processors is having a dramatic effect on memory interactions, not just with respect to performance but crucially to capacity. The slow growth of DRAM capacity, coupled with configuration limitations, is driving up the cost of memory systems as a proportion of total HPC platform cost. As a result, scientific institutions are increasingly interested in application memory consumption, and in justifying the cost associated with maintaining high memory-per-core ratios. By studying the scaling behaviour of applications, both in terms of runtime and memory consumption, we are able to demonstrate a decrease in workload efficiency in low memory environments, resulting from poor memory scalability. Current tools are lacking in performance and analytical capabilities, motivating the development of a new suite of tools for capturing and analysing memory consumption in large-scale parallel applications. By observing and analysing memory allocations we are able to record not only how much but, more crucially, where and when an application uses its memory. We use this analysis to look at some of the key principles in application scaling, such as processor decomposition, parallelisation models and runtime libraries, and their associated effects on memory consumption. We demonstrate how the data storage model of OpenMPI implementations inherently prevents scaling due to memory requirements, and investigate the benefits of different solutions. Finally, we show how, by analysing information gathered during application execution, we can automatically generate models to predict application memory consumption at different scales and runtime configurations. In addition we predict, using these models, how implementation changes could affect the memory consumption of an industry-strength benchmark.
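As a toy analogue of the "how much, where and when" idea (the thesis targets compiled MPI codes rather than Python), the standard-library tracemalloc module records the source location of each allocation; `assemble_matrix` below is a hypothetical stand-in for an application phase:

```python
import tracemalloc

def assemble_matrix(n=200_000):
    """Hypothetical stand-in for an application phase that allocates noticeably."""
    return [float(i) for i in range(n)]

tracemalloc.start(25)                      # keep up to 25 stack frames per allocation
data = assemble_matrix()                   # "when": snapshot around each phase of interest
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)                            # "where" and "how much": bytes and counts per source line
```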
36

Intelligent support for group work in collaborative learning environments

Liu, Shuangyan January 2012 (has links)
The delivery of intelligent support for group work is a complex issue in collaborative learning environments. This particularly pertains to the construction of effective groups and assessment of collaboration problems. This is because the composition of groups can be affected by several variables, and various methods are desirable for ascertaining the existence of different collaboration problems. Literature has shown that current collaborative learning environments provide limited or no support for teachers to cope with these tasks. Considering this and the increasing use of online collaboration, this research aims to explore solutions for improving the delivery of support for group work in collaborative learning environments, and thus to simplify how teachers manage collaborative group work. In this thesis, three aspects were investigated to achieve this goal. The first aspect focuses on proposing a novel approach for group formation based on students' learning styles. The novelty and importance of this approach is the provision of an automatic grouping method that can tailor to individual students' characteristics and fit well into existing collaborative learning environments. The evaluation activities comprise the development of an add-on tool and an undergraduate student experiment, which indicate the feasibility and strength of the proposed approach — being capable of forming diverse groups that tend to perform more effectively and efficiently than similar groups for conducting group discussion tasks. The second focus of this research relates to the identification of major group collaboration problems and their causes. A nationwide survey was conducted that reveals a student perspective on the issue, which current literature fails to adequately address. Based on the findings from the survey, an XML-based representation was created that provides a unique perspective on the linkages between the problems and causes identified. Finally, the focus was then shifted to the proposal of a novel approach for diagnosing the major collaboration problems identified. The originality and significance of this approach lies in the provision of various methods for ascertaining the existence of different collaboration problems identified, based on student interaction data that result from the group work examined. The evaluation procedure focused on the development of a supporting tool and several experiments with a test dataset. The results of the evaluation show that the feasibility and effectiveness are sustained, to a great extent, for the diagnostic methods addressed. Besides these main proposals, this research has explored a multi-agent architecture to unify all the components derived for intelligently managing online collaborative learning, which suggests an overarching framework providing context for other parts of this thesis.
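A minimal sketch of one possible automatic grouping heuristic is given below; it is not the algorithm developed in the thesis, and the use of numeric learning-style vectors (for example, Felder-Silverman dimension scores) is an assumption. Each student is greedily assigned to the non-full group whose current centroid is furthest from them, which tends to produce internally diverse groups.

```python
import numpy as np

def form_diverse_groups(styles, n_groups, seed=0):
    """Greedily place each student into the non-full group whose centroid of
    learning-style scores is furthest from the student's own vector."""
    rng = np.random.default_rng(seed)
    styles = np.asarray(styles, dtype=float)
    capacity = int(np.ceil(len(styles) / n_groups))
    groups = [[] for _ in range(n_groups)]
    for i in rng.permutation(len(styles)):
        scores = []
        for members in groups:
            if len(members) >= capacity:
                scores.append(-np.inf)                 # group is full
            elif not members:
                scores.append(np.inf)                  # fill empty groups first
            else:
                centroid = styles[members].mean(axis=0)
                scores.append(np.linalg.norm(styles[i] - centroid))
        groups[int(np.argmax(scores))].append(int(i))
    return groups

# Example: six students, two learning-style dimensions, split into two groups
print(form_diverse_groups([[1, 9], [2, 8], [9, 1], [8, 2], [5, 5], [4, 6]], 2))
```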
37

Manual and automatic authoring for adaptive hypermedia

Foss, Jonathan G. K. January 2012 (has links)
Adaptive Hypermedia allows online content to be tailored specifically to the needs of the user. This is particularly valuable in educational systems, where a student might benefit from a learning experience which only displays (or recommends) content that they need to know. Authoring for adaptive systems requires content to be divided into stand-alone fragments which must then be labelled with sufficient pedagogical metadata. Authors must also create a pedagogical strategy that selects the appropriate content depending on (amongst other things) the learner's profile. This authoring process is time-consuming and unfamiliar to most non-technical authors. Therefore, to ensure that students (of all ages, ability levels and interests) can benefit from Adaptive Educational Hypermedia, authoring tools need to be usable by a range of educators. The overall aim of this thesis is therefore to identify the ways that this authoring process can be simplified. The research in this thesis describes the changes that were made to the My Online Teacher (MOT) tool in order to address issues such as functionality and usability. The thesis also describes usability and functionality changes that were made to the GRAPPLE Authoring Tool (GAT), which was developed as part of a European FP7 project. These two tools (which utilise different authoring paradigms) were then used within a usability evaluation, allowing the research to draw a comparison between the two toolsets. The thesis also describes how educators can reuse their existing non-adaptive (linear) material (such as presentations and Wiki articles) by importing content into an adaptive authoring system.
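To make the authoring concepts concrete, here is a toy sketch, not MOT's or GAT's actual rule language, of a pedagogical strategy that selects fragments from their metadata and a simple learner model; the field names are illustrative assumptions.

```python
def select_fragments(fragments, learner):
    """Toy adaptation strategy: show a fragment only if the learner has every
    prerequisite concept and has not already mastered the concept it teaches."""
    visible = []
    for frag in fragments:
        prereqs_met = all(c in learner["known"] for c in frag["requires"])
        already_known = frag["teaches"] in learner["known"]
        if prereqs_met and not already_known:
            visible.append(frag["id"])
    return visible

# Hypothetical fragment metadata and learner model
fragments = [
    {"id": "intro-recursion", "teaches": "recursion", "requires": ["functions"]},
    {"id": "tail-calls", "teaches": "tail-calls", "requires": ["recursion"]},
]
learner = {"known": {"functions"}}
print(select_fragments(fragments, learner))   # -> ['intro-recursion']
```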
38

Supporting the migration from construal to program : rethinking software development

Pope, Nicolas William January 2011 (has links)
Creative software design, where there is no theory, no pre-computer precedent, no set of requirements or even necessarily an objective, challenges all existing software development methods. There can be no assumption that end-users know what they want. Each and every situation is unique, unpredictable and, due to feedback, continually changing. Fixed solutions developed by non-domain experts are all but impossible in more unconventional systems, and increasingly there may not be domain experts at all. Allowing individuals or groups of non-professionals to program is one approach (End-User Development). However, programming requires a degree of formality, design and specification that cannot co-exist with the most informal pre-theoretical applications which need to be developed by exploratory experimentation to help with problem-solving and sense-making. Instead of programming a finished application from the beginning, there is a need to develop personal, provisional and subjective models and evolve these into public, objective and assured applications. Developing these models "on-line" through interactive experimentation is essential, and it is the objective of Empirical Modelling (EM) research to enable the modelling of sense-making artefacts called construals. Whilst existing EM tools are able to support construals, there is a need to see how a smooth transition from construals to applications can be made. Such a migration is not one-way as the resulting applications need to remain plastic. The aim of this thesis is to explore and develop ways of enhancing EM principles and tools to better support such migrations from construals to programs. By first identifying key characteristics of construals and associated principles and techniques, along with a critique of the existing EM tool, a new kind of environment for plastic software development is proposed. A major contribution of this thesis is the development of such a prototype environment, which is illustrated using a collection of artefacts developed within it. From the prototype, called Cadence, an informal and a formal idealised account were elicited to provide a framework for this kind of development activity. The ideas explored in the thesis have the potential to impact upon the operating systems community and the everyday computer user in radical ways if taken forward. The thesis demonstrates that applications can be developed from construals without a translation step, keeping the resulting applications plastic.
39

Predictive dynamic resource allocation for web hosting environments

Al Ghamdi, Mohammed A. January 2012 (has links)
E-Business applications are subject to significant variations in workload and this can cause exceptionally long response times for users, the timing out of client requests and/or the dropping of connections. One solution is to host these applications in virtualised server pools, and to dynamically reassign compute servers between pools to meet the demands on the hosted applications. Switching servers between pools is not without cost, and this must therefore be weighed against possible system gain. This work is concerned with dynamic resource allocation for multi-tiered, cluster-based web hosting environments. Dynamic resource allocation is reactive, that is, when overloading occurs in one resource pool, servers are moved from another (quieter) pool to meet this demand. Switching servers comes with some overhead, so it is important to weigh up the costs of the switch against possible system gains. In this thesis we combine the reactive behaviour of two server switching policies – the Proportional Switching Policy (PSP) and the Bottleneck Aware Switching Policy (BSP) – with the proactive properties of several workload forecasting models. We evaluate the behaviour of the two switching policies and compare them against static resource allocation under a range of reallocation intervals (the time it takes to switch a server from one resource pool to another) and observe that larger reallocation intervals have a negative impact on revenue. We also construct model- and simulation-based environments in which the combination of workload prediction and dynamic server switching can be explored. Several different (but common) predictors – Last Observation (LO), Simple Average (SA), Sample Moving Average (SMA), Exponential Moving Average (EMA), Low Pass Filter (LPF), and AutoRegressive Integrated Moving Average (ARIMA) – have been applied alongside the switching policies. As each of the forecasting schemes has its own bias, we also develop a number of meta-forecasting algorithms – the Active Window Model (AWM), the Voting Model (VM), the Selective Model (SM), the Dynamic Active Window Model (DAWM), and a method based on Workload Pattern Analysis (WPA). The schemes are tested with real-world workload traces from several sources to ensure consistent and improved results. We also investigate the effectiveness of these schemes on workloads containing extreme events (e.g. flash crowds). The results show that workload forecasting can be very effective when applied alongside dynamic resource allocation strategies.
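As a small illustration of two of the predictors named above (the trace and parameters here are invented, not from the thesis), a Sample Moving Average and an Exponential Moving Average can be sketched as follows; a switching policy would compare such a forecast of the next interval's demand against each pool's capacity before paying the cost of a server move.

```python
def sma(history, window=5):
    """Sample Moving Average: mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def ema(history, alpha=0.3):
    """Exponential Moving Average: geometrically down-weights older observations."""
    forecast = history[0]
    for observation in history[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast

arrivals = [120, 135, 150, 400, 380, 360, 150, 140]   # requests per interval (invented)
print(sma(arrivals), ema(arrivals))                   # forecasts for the next interval
```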
40

Evaluating the performance of legacy applications on emerging parallel architectures

Pennycook, Simon J. January 2012 (has links)
The gap between a supercomputer's theoretical maximum ("peak") floating-point performance and that actually achieved by applications has grown wider over time. Today, a typical scientific application achieves only 5-20% of any given machine's peak processing capability, and this gap leaves room for significant improvements in execution times. This problem is most pronounced for modern "accelerator" architectures - collections of hundreds of simple, low-clocked cores capable of executing the same instruction on dozens of pieces of data simultaneously. This is a significant change from the low number of high-clocked cores found in traditional CPUs, and effective utilisation of accelerators typically requires extensive code and algorithmic changes. In many cases, the best way in which to map a parallel workload to these new architectures is unclear. The principal focus of the work presented in this thesis is the evaluation of emerging parallel architectures (specifically, modern CPUs, GPUs and Intel MIC) for two benchmark codes - the LU benchmark from the NAS Parallel Benchmark Suite and Sandia's miniMD benchmark - which exhibit complex parallel behaviours that are representative of many scientific applications. Using combinations of low-level intrinsic functions, OpenMP, CUDA and MPI, we demonstrate performance improvements of up to 7x for these workloads. We also detail a code development methodology that permits application developers to target multiple architecture types without maintaining completely separate implementations for each platform. Using OpenCL, we develop performance portable implementations of the LU and miniMD benchmarks that are faster than the original codes, and at most 2x slower than versions highly-tuned for particular hardware. Finally, we demonstrate the importance of evaluating architectures at scale (as opposed to on single nodes) through performance modelling techniques, highlighting the problems associated with strong-scaling on emerging accelerator architectures.
