1

A Comparison of Socially-Motivated Discussion Forum Models for Learning Management Systems

Almukhaylid, Maryam 01 December 2017
This thesis seeks to contribute to the field of learning management system (LMS) development in tertiary educational institutions, particularly to advance the adoption of learning management systems (LMSes) by exploring the incorporation of socially-motivated discussion forum models. The study proposes a Web-based application, which includes four different discussion forum models for LMSes, in order to test usability and student preferences. The purpose of this study was to compare two non-social discussion forums and two social discussion forums, to determine their appropriateness in terms of attributes or features and general functionality for LMSes. The design process led to the creation of a Web-based application called 4DFs, which includes the four discussion forum models. Two of these models are non-social discussion forums: the unstructured chat room model and the traditional general threaded discussion. The other two are social discussion forums, where users can choose who they converse with: the Twitter-style short comment feed and the Facebook-style forum. The features of the chat room and the traditional general threaded discussion are based on those of Sakai, since the research sample comprised students from the University of Cape Town (UCT). The Twitter-style and Facebook-style elements, such as retweets, hashtags, likes and reposts, are based on Twitter and Facebook. A pilot study was conducted to uncover any errors or issues with the experimental procedure; a controlled experiment was then conducted with 31 students from the institution. Participants filled out a background survey to gather demographic information and to capture their previous experience using chat rooms, discussion forums, and social media applications for university-related and non-university-related purposes. They were then given tasks that exercised all the features of the different discussion forum models. To avoid ordering bias across the forum models, the experiment used a Counterbalanced Measures Design. Participants completed the System Usability Scale (SUS) questionnaire in conjunction with their use of the Web-based application and, after using all four forums, a preferences questionnaire about the forums and their features. The Twitter-style short comment feed was preferred for its ease of use and because participants were familiar with it; it was followed by the unstructured chat room model and the traditional general threaded discussion, which were rated on ease of use and preferred layout. The Facebook-style forum was the least preferred. Participants indicated that the post, reply, edit, delete, and search buttons were the most beneficial features, and mentioned that the layout of the unstructured chat room model was not optimal, since the large volume of text made it confusing and hard to decipher. They suggested that media uploads, private chat, extra features for marking important posts, and a repost button would make the discussion forums more useful. The study found that students preferred a learning forum with certain characteristics: they prioritised ease of use, lower complexity, less interaction and a user-friendly interface over familiarity with the forum.
For learning, features need to serve a specific purpose: users do not necessarily want extra fancy features (like emojis); instead, they want systems that help them learn efficiently.
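Since the usability findings rest on the System Usability Scale, it may help to recall how the ten SUS responses become a 0-100 score. The Python sketch below implements the standard published SUS scoring rule; it is an illustration, not code from the thesis.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled by 2.5
    to yield a score out of 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: a fairly positive response pattern scores 72.5
print(sus_score([4, 2, 4, 2, 4, 2, 4, 3, 4, 2]))
```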
2

SHOP-Net: Moving from Paper to Mobile

Talbot, Michael 01 January 2011
Stock-ordering is one of the challenges that microenterprises face, because shop owners often need to leave their shops to travel to suppliers of goods. The Triple Trust Organization (TTO) is a non-profit, non-governmental organization (NGO) that works with microenterprises around Cape Town and addresses this problem: it acts as a supplier and fetches stock orders from the shops it works with. Its ordering system relied on paper order forms and had a number of inefficiencies. To address these inefficiencies, a mobile-based stock-ordering system was designed with TTO. The system allows orders to be recorded on a mobile phone application and sent to a server at the TTO office, where they are then processed. It successfully increased TTO's efficiency in three ways, namely improved data-processing ability, increased order accuracy and increased access to information. The evaluation was conducted against TTO's own success criteria, and the system has been in use for seven months. We argue that evaluations with NGOs should go beyond management alone and include all of those affected by the system.
3

VRBridge: a Constructivist Approach to Supporting Interaction Design and End-User Authoring in Virtual Reality

Winterbottom, Cara 01 June 2010
For any technology to become widely used and accepted, it must support end-user authoring and customisation. This means making the technology accessible by enabling understanding of its design issues and reducing its technical barriers. Our interest is in enabling end-users to author dynamic virtual environments (VEs), specifically their interactions: player interactions with objects and the environment, and object interactions with each other and the environment. This thesis describes a method to create tools and design aids which enable end-users to design and implement interactions in a VE and assist them in building the requisite domain knowledge, while reducing the costs of learning a new set of skills. Our design method is based in constructivism, a theory that examines the acquisition and use of knowledge. It provides principles for managing complexity in knowledge acquisition: multiplicity of representations and perspectives; simplicity of basic components; encouragement of exploration; support for deep reflection; and giving users as much control of their process as possible. We derived two main design aids from these principles: multiple, interactive and synchronised domain-specific representations of the design; and multiple forms of non-invasive and user-adaptable scaffolding. The method began with extensive research into representations and scaffolding, followed by investigation of the design strategies of experts, the needs of novices and how best to support them with software, and the requirements of the VR domain. We also conducted a classroom observation of the practices of non-programmers in VR design, to discover their specific problems with effectively conceptualising and communicating interactions in VR. Based on our findings in this research and our constructivist guidelines, we developed VRBridge, an interaction authoring tool. This contained a simple event-action interface for creating interactions using trigger-condition-action triads, or Triggersets. We conducted two experimental evaluations during the design of VRBridge to test the effectiveness of our design aids and the basic tool. The first tested the effectiveness of the Triggersets and additional representations: a Floorplan, a Sequence Diagram and Timelines. We used observation, interviews and task success to evaluate how effectively end-users could analyse and debug interactions created with VRBridge. We found that the Triggersets were effective and usable by novices to analyse an interaction design, and that the representations significantly improved end-user work and experience. The second experiment was large-scale (124 participants) and conducted over two weeks. Participants worked on authoring tasks which embodied typical interactions and complexities in the domain. We used a task exploration metric, questionnaires and computer logging to evaluate aspects of task performance: how effectively end-users could create interactions with VRBridge; how effectively they worked in the domain of VR authoring; how much enjoyment or satisfaction they experienced during the process; and how well they learned over time. This experiment tested the entire system and the effects of the scaffolding and representations.
We found that all users were able to complete authoring tasks using VRBridge after very little experience with the system and domain; all users improved and felt more satisfaction over time; users with representations or scaffolding as a design aid completed the tasks more expertly, explored more effectively, felt more satisfaction and learned better than those without design aids; users with representations explored more effectively and felt more satisfaction than those with scaffolding; and users with both design aids learned better, but did not otherwise improve over users with a single design aid. We also gained evidence about how the scaffolding, representations and basic tool were used during the evaluation. The contributions of this thesis are: an effective and efficient theory-based design method; a case study in the use of constructivism to structure a design process and deliver effective tools; a proof-of-concept prototype with which novices can create interactions in VR without traditional programming; evidence about the problems that novices face when designing interactions and dealing with unfamiliar programming concepts; empirical evidence about the relative effectiveness of additional representations and scaffolding as support for designing interactions; guidelines for supporting end-user authoring in general; and guidelines for the design of effective interaction authoring systems in general.
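To make the trigger-condition-action model concrete, the sketch below shows one way a Triggerset-style triad could be represented and evaluated. It is a hypothetical Python rendering for illustration only, not VRBridge's implementation (the point of VRBridge is that novices author such triads without traditional programming); the event names and world model are invented.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Triggerset:
    """One trigger-condition-action triad: when the trigger event arrives
    and every condition holds, all actions fire."""
    trigger: str                                   # event name, e.g. "player_enters_room"
    conditions: List[Callable[[dict], bool]] = field(default_factory=list)
    actions: List[Callable[[dict], None]] = field(default_factory=list)

    def handle(self, event: str, world: dict) -> None:
        if event == self.trigger and all(c(world) for c in self.conditions):
            for act in self.actions:
                act(world)

# Illustrative use: open a door when the player enters while holding a key.
world = {"player_has_key": True, "door_open": False}
open_door = Triggerset(
    trigger="player_enters_room",
    conditions=[lambda w: w["player_has_key"]],
    actions=[lambda w: w.update(door_open=True)],
)
open_door.handle("player_enters_room", world)
print(world["door_open"])  # True
```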
4

Fast Galactic Structure Finding using Graphics Processing Units

Wood, Daniel 01 June 2014
Cosmological simulations are used by astronomers to investigate large-scale structure formation and galaxy evolution. Structure finding, that is, the discovery of gravitationally-bound objects such as dark matter halos, is a crucial step in many such simulations. During recent years, advancing computational capacity has led to halo-finders needing to manage increasingly large simulations. As a result, many multi-core solutions have arisen in an attempt to process these simulations more efficiently. However, a many-core approach to the problem using graphics processing units (GPUs) appears largely unexplored. Since these simulations are inherently n-body problems, they contain a high degree of parallelism, which makes them very well suited to a GPU architecture. Therefore, it makes sense to determine the potential for further research into halo-finding algorithms on a GPU. We present a number of modified algorithms for accelerating the identification of halos and sub-structures using entry-level graphics hardware. The algorithms are based on an adaptive hierarchical refinement of the friends-of-friends (FoF) method using six phase-space dimensions, which allows for robust tracking of sub-structures. These methods are highly amenable to parallel implementation and run on GPUs. We implemented four separate systems: two on GPUs and two on CPUs. The first system for both CPU and GPU was implemented as a proof-of-concept exercise to familiarise us with the problem; these utilised minimum spanning trees (MSTs) and brute-force methods. Our second implementation, for the CPU and GPU, capitalised on knowledge gained from the proof-of-concept applications, leading us to use kd-trees to solve the problem efficiently. The CPU implementations were intended to serve as benchmarks for our GPU applications. In order to verify the efficacy of the implemented systems, we applied our halo finders to cosmological simulations of varying size and compared the results obtained to those given by a widely used commercial FoF halo-finder. To conduct a fair comparison, the CPU benchmarks were implemented using well-known libraries optimised for these calculations. The best-performing implementation, with minimal optimisation, used kd-trees on the GPU. This achieved a 12x speed-up over our CPU implementation, which used similar methods. The same GPU implementation was compared with a current, widely used commercial FoF halo-finder system, and achieved a 2x speed-up for up to 5 million particles. Results suggest a scalable solution, where speed-up increases with the size of the dataset used. We conclude that there is great potential for future research into an optimised kd-tree implementation on graphics hardware for the problem of structure finding in cosmological simulations.
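As background on the friends-of-friends (FoF) method the thesis builds on: particles closer together than a linking length are "friends", and groups are the transitive closure of friendship. The sketch below is a minimal brute-force position-space FoF in Python, corresponding to the kind of baseline the thesis accelerates; it is illustrative only, and the thesis itself works in six phase-space dimensions with kd-trees and GPU parallelism.

```python
import numpy as np

def friends_of_friends(positions, linking_length):
    """Brute-force friends-of-friends grouping via union-find over all pairs.

    Particles closer than `linking_length` are linked; groups are the
    transitive closure of those links. O(n^2) in the number of particles.
    """
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        # distances from particle i to all later particles
        d = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        for j in np.nonzero(d < linking_length)[0] + i + 1:
            parent[find(i)] = find(j)      # merge the two groups

    return [find(i) for i in range(n)]     # group label per particle

# Two clusters of 3-D points separated by more than the linking length
pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5]])
print(friends_of_friends(pts, linking_length=0.2))  # [1, 1, 3, 3]: two groups
```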
5

Acceleration of the noise suppression component of the DUCHAMP source-finder.

Badenhorst, Scott 01 January 2015
The next generation of radio interferometer arrays - the proposed Square Kilometre Array (SKA) and its precursor instruments, the Karoo Array Telescope (MeerKAT) and the Australian Square Kilometre Array Pathfinder (ASKAP) - will produce radio observation survey data orders of magnitude larger than current sizes. The sheer size of the imaged data produced necessitates fully automated solutions to accurately locate, and produce useful scientific data for, radio sources which are (for the most part) partially hidden within inherently noisy radio observations (source extraction). Automated extraction solutions exist, but are computationally expensive and do not yet scale to the performance required to process large data in practical time-frames. The DUCHAMP software package is one of the most accurate source extraction packages for general (source shape unknown) source finding. DUCHAMP's accuracy is primarily facilitated by the à trous wavelet reconstruction algorithm, a multi-scale smoothing algorithm which suppresses erratic observation noise. This algorithm is the most computationally expensive and memory-intensive within DUCHAMP, and consequently improvements to it greatly improve overall DUCHAMP performance. We present a high-performance, multithreaded implementation of the à trous algorithm with a focus on 'desktop' computing hardware, to enable standard researchers to run their own accelerated searches. Our solution consists of three main areas of improvement: single-core optimisation, multi-core parallelism and the efficient out-of-core computation of large data sets with memory management libraries. Efficient out-of-core computation (data partially stored on disk when primary memory resources are exceeded) of the à trous algorithm accounts for 'desktop' computing's limited fast memory resources by mitigating the performance bottleneck associated with frequent secondary storage access. Although this work focuses on 'desktop' hardware, the majority of the improvements developed are general enough to be used within other high-performance computing models. Single-core optimisations improved algorithm accuracy by reducing rounding error and achieved a 4x serial performance increase, which scales with the filter size used during reconstruction. Multithreading on a quad-core CPU further increased the performance of the filtering operations within reconstruction to 22x (performance scaling approximately linearly with increased CPU cores) and achieved a 13x performance increase overall. All evaluated out-of-core memory management libraries performed poorly with parallelism. Single-threaded memory management partially mitigated the slow disk-access bottleneck and achieved a 3.6x increase (uniform for all tested large data sets) for filtering operations and a 1.5x increase overall. Faster secondary storage solutions such as solid state drives or RAID arrays are required to process large survey data on 'desktop' hardware in practical time-frames.
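For readers unfamiliar with it, the à trous ("with holes") algorithm smooths the data repeatedly with a B3-spline filter whose taps are spaced 2^j samples apart at scale j, and records the differences between successive smoothings as wavelet planes; summing the planes plus the final smooth array reconstructs the input exactly, and noise suppression comes from thresholding the planes. Below is a minimal 1-D Python sketch of the decomposition, offered as an illustration rather than the thesis code: DUCHAMP operates on 3-D data cubes, and the clamped edge handling here is a simplification.

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3-spline filter used by à trous

def a_trous_planes(signal, levels):
    """1-D à trous decomposition: at scale j the filter taps are spaced
    2**j samples apart ("with holes"); wavelet planes are the differences
    between successive smoothings."""
    smooth = signal.astype(float)
    planes = []
    for j in range(levels):
        step = 2 ** j
        prev = smooth
        smooth = np.zeros_like(prev)
        n = len(prev)
        for k, c in zip(range(-2, 3), B3):
            idx = np.clip(np.arange(n) + k * step, 0, n - 1)  # clamped edges
            smooth += c * prev[idx]
        planes.append(prev - smooth)  # detail (wavelet) coefficients
    return planes, smooth

sig = np.random.default_rng(0).normal(size=64)
planes, residual = a_trous_planes(sig, levels=3)
print(np.allclose(sum(planes) + residual, sig))  # True: exact reconstruction
```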
6

Investigating the Impact of Organised Technology-driven Orchestration on Teaching

Phiri, Lighton 01 October 2018
Orchestration of learning involves the real-time management of activities performed by educators in learning environments, with a particular focus on the effective use of technology. While different educational settings present unique problems, the common challenges have been noted to arise primarily from multiple heterogeneous activities and their associated intrinsic and extrinsic constraints. In addition to these challenges, this thesis argues that the complexities of orchestration are further amplified by the ad hoc nature of the approaches and techniques used to orchestrate learning activities. The thesis proposes a streamlined approach to technology-driven orchestration of learning in order to address these challenges and complexities. Specifically, it proposes an organised approach that focuses on three core aspects of orchestration: activity management, resource management and the sequencing of learning activities. Orchestration was comprehensively explored in order to identify the core aspects essential for streamlining technology-driven orchestration. Proof-of-concept orchestration toolkits, based on the proposed orchestration approach, were implemented and evaluated in order to assess the feasibility of the approach, its effectiveness and its potential impact on the teaching experience. Comparative-analysis and guided-orchestration controlled studies were conducted to compare the effectiveness of ad hoc orchestration with streamlined orchestration and to measure the orchestration load, respectively. In addition, a case study of a course that employed a flipped classroom strategy was conducted to assess the feasibility of the proposed approach. The feasibility was further assessed by integrating a workflow, based on the proposed approach, that facilitates the sharing of reusable orchestration packages. The results from the studies suggest that the streamlined approach is more effective than ad hoc orchestration and has the potential to provide a positive user experience. The results also indicate that the approach imposes an acceptable orchestration load during the scripting of learning activities. Case studies conducted in authentic educational settings suggest that the approach is feasible and potentially applicable to practical usage scenarios. The long-term implication is that streamlining technology-driven orchestration could improve the effectiveness of educators when orchestrating learning activities.
7

RFI Monitoring for the MeerKAT Radio Telescope

Schollar, Christopher 01 January 2015
South Africa is currently building MeerKAT, a 64-dish radio telescope array, as a precursor to the proposed Square Kilometre Array (SKA). Both telescopes will be located at a remote site in the Karoo with a low level of Radio Frequency Interference (RFI). It is important to maintain this low level of RFI to ensure that MeerKAT has an unobstructed view of the universe across its bandwidth, and the only way to effectively manage the environment is with a record of RFI around the telescope. The RFI management team on the MeerKAT site has multiple tools for monitoring RFI. A seven-dish radio telescope array, KAT7, is used for bi-weekly RFI scans on the horizon. The team has two RFI trailers which provide a mobile spectrum and transient measurement system, as well as commercial handheld spectrum analysers. Most of these tools are used only sporadically during RFI measurement campaigns; none of them provided a continuous record of the environment, and none performed automatic RFI detection. Here we design and implement an automatic, continuous RFI monitoring solution for MeerKAT. The monitor consists of an auxiliary antenna on site which continuously captures and stores radio spectra. The statistics of the spectra describe the radio frequency environment and identify potential RFI sources. All of the stored RFI data is accessible over the web: users can view the data using interactive visualisations or download the raw data. The monitor thus provides a continuous record of the RF environment, automatically detects RFI and makes this information easily accessible. The monitor functioned successfully for over a year with minimal human intervention and assisted RFI management on site during RFI campaigns. The data has proved accurate, the RFI detection algorithm has been shown to be effective, and the web visualisations have been tested by MeerKAT engineers and astronomers and proven useful. The monitor represents a clear improvement over previous monitoring solutions used by MeerKAT and is an effective site management tool.
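As a sense of what statistical RFI detection over stored spectra can look like, the sketch below flags narrow-band interference with a robust per-channel threshold (median plus a multiple of the median absolute deviation). This is a common heuristic offered purely as an illustration; the abstract does not specify the monitor's actual detection algorithm.

```python
import numpy as np

def flag_rfi(spectra, k=5.0):
    """Flag samples whose power exceeds a robust per-channel threshold.

    `spectra` is a (time, channel) array of captured power spectra. Each
    channel is compared against median + k * MAD -- a common robust RFI
    heuristic, not the algorithm deployed at MeerKAT.
    """
    med = np.median(spectra, axis=0)
    mad = np.median(np.abs(spectra - med), axis=0)
    return spectra > med + k * mad          # boolean mask of flagged samples

rng = np.random.default_rng(1)
spectra = rng.normal(10.0, 1.0, size=(100, 512))   # quiet background
spectra[40:45, 200] += 50.0                        # injected narrow-band RFI
mask = flag_rfi(spectra)
print(mask[:, 200].sum())   # the 5 injected samples are flagged
print(mask.mean())          # overall flagged fraction stays tiny
```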
8

A Methodology for Analyzing Power Consumption in Wireless Communication Systems

Chibesakunda, Mwelwa K. 01 March 2004
Energy usage has become an important issue in wireless communication systems. The energy-intensive nature of wireless communication has spurred concern over how systems can best use this limited resource. Research in the energy-efficient design of wireless communication systems shows that one of its challenges is that the overall performance of the system depends, in a coupled way, on the different submodules of the system, i.e. the antenna, power amplifier, modulation, error control coding, and network architecture. Network architecture implementation strategies offer protocol software implementors an opportunity to incorporate low-power strategies into the design of the network protocols used for data communication. This dissertation proposes a methodology that allows a software protocol implementor to analyze the power consumption of a wireless communication system. The foundation of this methodology lies in the understanding that the formal specification of the wireless interface network architecture can be used to predict the performance of the system. By extending this hypothesis, a protocol implementor can use the formal specification to derive the power consumption behaviour of the wireless system during normal operation (transmission or reception of data). A high-level formalism like state-transition graphs can be used to track the protocol processing behaviour and to derive the associated continuous-time Markov chains. Because of their versatility, Markov reward models (MRMs) are used to model the power consumption associated with the different states of a specified protocol layer. The models are solved analytically using the Möbius performance and dependability tool. Using the MRM accumulation and utilization measures, a profile of the power consumption is generated. Results from the experiments on the protocol layers show the individual power consumption and utilization of the different states, as well as a comparison of the accumulated power consumption of the different protocol layers. Ultimately, the results from the reward model solution can be used in the energy-efficient design of wireless communication systems. Lastly, in order to get an idea of how wireless communication device companies handle issues of power consumption, we consulted with the wireless module engineers at Siemens Communication South Africa and present our findings on current practices in energy-efficient protocol implementation.
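To make the reward-model computation concrete: a protocol layer is modelled as a continuous-time Markov chain whose states carry power-draw reward rates; the utilization measure weights each state's reward by the long-run fraction of time spent there, and the accumulation measure integrates it over an interval. The Python sketch below illustrates that computation on a hypothetical three-state chain; the transition rates and power values are invented, and the thesis itself specifies and solves its models in the Möbius tool.

```python
import numpy as np

# Hypothetical 3-state CTMC for a wireless interface: sleep, idle, transmit.
# Q[i, j] is the transition rate from state i to state j (rows sum to zero).
Q = np.array([
    [-0.2,  0.2,  0.0],   # sleep    -> idle
    [ 0.5, -1.5,  1.0],   # idle     -> sleep or transmit
    [ 0.0,  2.0, -2.0],   # transmit -> idle
])
power = np.array([0.01, 0.3, 1.8])  # reward rates: watts drawn in each state

# Stationary distribution: solve pi @ Q = 0 with sum(pi) = 1 by replacing
# one balance equation with the normalisation constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(pi)                  # [0.625 0.25 0.125]: long-run time in each state
print(pi @ power)          # ~0.306 W expected draw (utilisation-weighted)
print(3600 * pi @ power)   # ~1102.5 J accumulated over one hour
```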
9

Identification and Analysis of Effectiveness Criteria, Contextual Conditions and Dimensions of the Organizational Structure of German Faculties

Hagerer, Ilse 26 February 2021
Profound changes in higher education induced by the reforms of New Public Management (NPM) have led to administrative growth, professionalism, managerialism, and a higher relevance of the economic principle. As a further consequence, the importance of the organizational perspective has increased. Thus, questions about effective organizational structures and their design frameworks have become more crucial. The contingency approach provides insights into these questions because it investigates the effectiveness of organizational structures in different situations. Based on expert interviews with faculty managers and subsequent qualitative content analysis, in-depth insights are gained into the effectiveness criteria of German faculties, the dimensions of their organizational structure, and their contextual factors, as subjectively perceived by the interviewees. Furthermore, it is possible to renew the approach, to refute criticism of it, and to build a reference framework as a precursor for developing new theories.
10

An End-to-End Solution for Complex Open Educational Resources

Mohamed Nour, Morwan 01 November 2012
Open access and open resources have gained much attention worldwide in the last few years. The interest in sharing information freely by means of the World Wide Web has grown rapidly in many different fields, and information is now available in many different data forms because of the continuous evolution of technology. The main objective of this thesis is to provide content creators and educators with a solution that simplifies the process of depositing into digital repositories. We created a desktop tool named ORchiD (Open educational Resources Depositor) to achieve this goal. The tool employs educational metadata and content packaging standards to create packages, and conforms to a deposit protocol to ingest resources into repositories. A test repository was installed and adapted to handle Open Educational Resources. The proposed solution is centered on the front-end application, which handles the complex objects on the user's desktop. The desktop application allows users to select and describe their resources, then creates the package and forwards it to the specified repository using the deposit protocol. The solution proved simple for users, but is in need of further improvement, specifically with respect to the metadata standard presented to users.
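The deposit workflow described here (describe a resource, build a metadata-bearing package, push it to a repository over a deposit protocol) can be sketched generically. The Python below is a hypothetical illustration using only standard-library calls: the package layout, metadata file name and endpoint URL are invented, and a real deposit protocol such as SWORD adds authentication and packaging headers that this sketch omits.

```python
import io
import zipfile
import urllib.request

def build_package(resource_path: str, metadata_xml: str) -> bytes:
    """Bundle a resource and its metadata record into one zip package,
    roughly in the spirit of content-packaging standards such as IMS CP."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(resource_path, arcname="resource/" + resource_path.split("/")[-1])
        z.writestr("metadata.xml", metadata_xml)   # illustrative file name
    return buf.getvalue()

def deposit(package: bytes, endpoint: str) -> int:
    """POST the package to a repository deposit endpoint (hypothetical URL;
    real protocols add auth and headers such as Packaging)."""
    req = urllib.request.Request(
        endpoint, data=package, method="POST",
        headers={"Content-Type": "application/zip"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (endpoint is illustrative, not a real service):
# status = deposit(build_package("lecture1.pdf", "<lom>...</lom>"),
#                  "https://repository.example.org/deposit")
```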
