Effective visualisation of callgraphs for optimisation of parallel programs: a design study

Mabakane, Mabule Samuel, 01 March 2019
Parallel programs are increasingly used to perform scientific calculations on supercomputers. Optimising parallel applications to scale well, and ensuring maximum parallelisation, is a challenging task. The performance of parallel programs is affected by a range of factors, such as limited network bandwidth, parallel algorithms, memory latency and the speed of the processors. The term “performance bottlenecks” refers to obstacles that cause slow execution of parallel programs. Visualisation tools are used to identify the performance bottlenecks of parallel applications in an attempt to optimise the execution of the programs and fully utilise the available computational resources. TAU (Tuning and Analysis Utilities) callgraph visualisation is one such tool commonly used to analyse the performance of parallel programs. The callgraph visualisation shows the relationships between the different parts (for example, routines, subroutines, modules and functions) of the parallel program executed during the run. TAU’s callgraph tool has limitations: it cannot effectively display the large volumes of performance data (metrics) generated during the execution of a parallel program, and the relationships between the different parts of the program executed during the run can be hard to see. The aim of this work is to design an effective callgraph visualisation that enables users to efficiently identify performance bottlenecks incurred during the execution of a parallel program. This design study employs a user-centred iterative methodology to develop a new callgraph visualisation, involving expert users in the three developmental stages of the system: these design stages develop prototypes of increasing fidelity, from a paper prototype to high-fidelity interactive prototypes in the final design. The paper-based prototype of the new callgraph visualisation was evaluated by a single expert from the University of Oregon’s Performance Research Lab, which developed the original callgraph visualisation tool; this expert is a computer scientist who holds a doctoral degree in computer and information science from the University of Oregon and heads the lab. The interactive prototype (first high-fidelity design) was evaluated against the original TAU callgraph system by a team of expert users, comprising doctoral graduates and undergraduate computer scientists from the University of Tennessee, United States of America (USA). The final complete prototype (second high-fidelity design) of the callgraph visualisation was developed with the D3.js JavaScript library and evaluated by users (doctoral graduates and undergraduate computer science students) from the University of Tennessee, USA. Most of these users have between 3 and 20 years of experience in High Performance Computing (HPC), while the expert has more than 20 years of experience in developing visualisation tools used to analyse the performance of parallel programs. The expert and users were chosen to test the new callgraphs against the original callgraphs because they have experience in analysing, debugging, parallelising, optimising and developing parallel programs. After the evaluations, the final visualisation design of the callgraphs was found to be effective, interactive, informative and easy to use.
It is anticipated that the final design of the callgraph visualisation will help parallel computing users to effectively identify performance bottlenecks within parallel programs, and enable full utilisation of computational resources within a supercomputer.
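
The abstract names D3.js as the library behind the final design. As a rough sketch of how a callgraph can be drawn as a node-link diagram with D3, the snippet below lays out a small invented call tree and sizes each node by the time spent in that routine, one common way to make expensive routines stand out. The profile data, the sizing rule and the element id are illustrative assumptions; the thesis prototype's actual data model and visual encodings are not reproduced here.

// Minimal sketch of a D3.js callgraph rendering (assumes d3 v7 is loaded and
// an <svg id="callgraph" width="600" height="300"> exists on the page).
// The profile below is invented for illustration.
const profile = {
  name: "main", time: 120, children: [
    { name: "solve", time: 90, children: [
      { name: "mpi_allreduce", time: 40 },
      { name: "stencil_update", time: 45 },
    ]},
    { name: "io_write", time: 25 },
  ],
};

const root = d3.hierarchy(profile);
d3.tree().size([560, 260])(root);        // assigns x/y to every node

const g = d3.select("#callgraph").append("g")
  .attr("transform", "translate(20,20)");

// Edges: caller -> callee links.
g.selectAll("path")
  .data(root.links())
  .join("path")
  .attr("d", d3.linkVertical().x(d => d.x).y(d => d.y))
  .attr("fill", "none")
  .attr("stroke", "#999");

// Nodes: radius encodes time spent in the routine, so costly
// routines (potential bottlenecks) stand out visually.
g.selectAll("circle")
  .data(root.descendants())
  .join("circle")
  .attr("cx", d => d.x)
  .attr("cy", d => d.y)
  .attr("r", d => 4 + Math.sqrt(d.data.time))
  .attr("fill", "steelblue");

g.selectAll("text")
  .data(root.descendants())
  .join("text")
  .attr("x", d => d.x + 8)
  .attr("y", d => d.y + 4)
  .text(d => d.data.name);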

High-Level Control of Agent-based Crowds by means of General Constraints

Jacka, David, 01 February 2009
The use of virtual crowds in visual effects has grown tremendously since the warring armies of virtual orcs and elves were seen in The Lord of the Rings. These crowds are generated by agent-based simulations, where each agent has the ability to reason and act for itself. This autonomy is effective at automatically producing realistic, complex group behaviour but leads to problems in controlling the crowds. Due to interaction between crowd members, the link between the behaviour of the individual and that of the whole crowd is not obvious. Controlling a crowd's behaviour is, therefore, time-consuming and frustrating, as manually editing the behaviour of individuals is often the only control approach available. This problem of control has not been widely addressed in crowd simulation research. We propose, implement and test a system in which a user may control the behaviour of a crowd by means of general constraints. This constraint satisfaction system automatically alters the behaviour of the individuals in the crowd such that the group behaviour meets the provided constraints. We test this system on a number of scenarios involving different types of agents and compare the effectiveness of this automatic system to an expert user manually changing the crowd. We find our method of control, in most cases, to be at least as effective as the expert user.
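
As a toy illustration of the constraint-satisfaction idea, the sketch below nudges each agent's individual goal until an aggregate crowd property (here, the mean goal position) meets a user-supplied constraint. The agent model, the centroid constraint and the update rule are invented for illustration and are not the thesis's algorithm.

// Toy sketch: adjust individual agent goals so the crowd's mean goal
// converges on a user constraint. All names and the update rule are
// illustrative assumptions.
function satisfyCentroidConstraint(agents, target, steps = 100, rate = 0.1) {
  for (let s = 0; s < steps; s++) {
    const cx = agents.reduce((sum, a) => sum + a.goalX, 0) / agents.length;
    const cy = agents.reduce((sum, a) => sum + a.goalY, 0) / agents.length;
    // Shift each agent's goal a little toward closing the gap between
    // the crowd centroid and the constraint target.
    for (const a of agents) {
      a.goalX += rate * (target.x - cx);
      a.goalY += rate * (target.y - cy);
    }
  }
  return agents;
}

const crowd = [{ goalX: 0, goalY: 0 }, { goalX: 4, goalY: 2 }];
satisfyCentroidConstraint(crowd, { x: 10, y: 10 });
// After the loop the mean of the agents' goals sits (nearly) on the target,
// without every agent being hand-edited individually.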

Designing an effective user interface for the Android tablet environment

Chang, Genevieve, 01 January 2015
With over 1.3 million applications on the Android marketplace, there is increasing competition between mobile applications for customer sales. As usability is a significant factor in an application’s success, many mobile developers refer to the Android design guidelines when designing the user interface (UI). These principles help to provide consistency of navigation and aesthetics with the rest of the Android platform. However, misinterpretation of the abstract guidelines may mean that the patterns and elements selected to organise an application’s content do not improve its usability. Usability tests are therefore beneficial to ensure that an application meets its objectives efficiently and to improve the user experience. Usability testing is a crucial step in the mobile development process. Many freelance developers, however, have limited resources for usability testing, even though the advantages of usability feedback during initial development stages are clear and can save time and money in the long run. In this thesis, we investigate which method of usability testing is most useful for resource-constrained mobile developers. To test the efficacy of the Android guidelines, three alternate designs of a unique Android tablet application, Glycano, were developed. High-fidelity paper prototypes were presented to end-users for usability testing and to usability experts for heuristic evaluations. Both the usability and heuristic tests demonstrated that following the Android guidelines aids user familiarity and learnability. Regardless of the different UI designs of the three mockups, the Android guidelines provided an initial level of usability by offering familiarity to proficient users and an intuitiveness of certain patterns to new users. However, efficiency in building Glycano schematics was an issue that arose consistently. Testing with end-users and experts revealed several navigational problems. The usability experts uncovered more general UI problems than the end-user group, who focused more on the content of the application. The experts also provided more refinements and suggestions for additional features to enhance usability and user experience. The use of usability experts would therefore be most advantageous in the initial design stages of an application. Feedback from usability testing with end-users is, however, also beneficial, and is more valuable than not performing any test at all.

Telecommuting in the Developing World: A Case of the Day-Labour Market

Chepken, Christopher, 01 January 2013
Information and Communication Technologies (ICTs) in general, and mobile phones in particular, have demonstrated positive outcomes in various dimensions of social transformation and human development. As a result, many researchers have focused on ICT innovations targeting the poor. Among the poor are the low-skilled day-labourers who belong to the Day-labour Market (DLM), which is also made up of employers, job-brokers and intermediary organisations. The DLM’s main activities involve a great deal of travelling: workers travel in search of jobs, and employers in search of workers. This travel places heavy economic pressure on the day-labourers, reducing their net earnings while they struggle with extreme poverty. The first objective of our study was to find out how, and which, ICT interventions can be used to alleviate the challenges faced by the DLM stakeholders. The nature of our problem resembled that of studies that use ICTs to reduce travel distance. Such studies fall under subjects such as teleactivities and teleworking/telecommuting, and advocate the prospect of working anywhere, anytime. These studies have not received much research attention in the developing world; they have mainly been done in the developed world, and mostly on white-collar workers and organisations. This brought about our second objective: to find out whether the ICT interventions for the DLM could be studied under teleworking/telecommuting and whether the benefits of telecommuting can be realised for blue-collar workers. Our research methodology was Action Research applying three case studies. We used participant observation and both structured and unstructured interviews for qualitative data collection, and questionnaires to collect quantitative data. Contextual inquiry, prototyping and technology probes were applied as our design techniques. The prototypes were evaluated in situ to assess usability and uncover the user experience. We mainly employed qualitative data analysis but, where appropriate, triangulated with quantitative data analysis. The research outcomes were divided into three categories: (1) knowledge of the DLM’s characteristics, which depicted different forms of the DLM and shaped our design process; (2) the DLM software designs tested as prototype applications and software artefacts deployed for use by the DLM; and (3) the meaning and state of telecommuting/teleworking before and after our experiments in the DLM. In the first category, appreciating the challenges faced by our primary target users, the day-labourers, helped shape our designs and our inquiry to include intermediation. The prototype applications included remote mobile applications and web-based server-side software systems. Although most of these applications were meant as proofs of concept, some of them ended up being implemented as fully functional systems. Finally, in the third finding, travel reduction using ICTs (mainly mobile phones) had been practised by some of the DLM stakeholders even before the commencement of our study. After our intervention, we discovered that implementing telecommuting/teleworking within the DLM may be possible, but with a raft of redefinitions and changes in technology innovations. We therefore identified factors to consider when thinking of implementing telecommuting among blue-collar employees, organisations and employers.

A GPU-Based Level of Detail System for the Real-Time Simulation and Rendering of Large-Scale Granular Terrain

Leach, Craig, 01 June 2014
Real-time computer games and simulations often contain large virtual outdoor environments. Terrain forms an important part of these environments, and may consist of various granular materials, such as sand, rubble and rocks. Previous approaches to rendering such terrains rely on simple textured geometry, with little to no support for dynamic interactions. Recently, particle-based granular terrain simulations have emerged as an alternative method for rendering granular terrain. These systems simulate granular materials by using particles to represent the individual granules, and exhibit realistic, physically correct interactions with dynamic objects. However, they are extremely computationally expensive, and thus may only feasibly be used to simulate small areas of terrain. In order to overcome this limitation, this thesis builds upon a previously created particle-based granular terrain simulation by integrating it with a heightfield-based terrain system. In this way, we create a level of detail system for simulating large-scale granular terrain. The particle-based terrain system is used to represent areas of terrain around dynamic objects, whereas the heightfield-based terrain is used elsewhere. This allows large-scale granular terrain to be simulated in real time, with physically correct dynamic interactions. It is made possible by a novel system which converts terrain from one representation to the other in real time, while carrying changes made in the particle-based system over to the heightfield-based system. The system also allows updates to particle systems to be paused, creating the illusion that more particle systems are active than actually are. We show that the system is capable of simulating and rendering multiple particle-based simulations across a large-scale terrain whilst maintaining real-time performance. However, the number of particles used, and thus the number of particle-based simulations which may be used, is limited by the computational resources of the GPU.
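
One direction of the level-of-detail handoff described above can be sketched as collapsing a patch of particles back into heightfield cells. In the sketch below, each cell keeps the highest particle surface that falls inside it; the grid layout, particle format and max-height rule are assumptions for illustration, not the thesis's actual conversion scheme.

// Hedged sketch: bin particles into a heightfield grid, keeping the
// highest granule top per cell. Names and data layout are illustrative.
function particlesToHeightfield(particles, cols, rows, cellSize) {
  const heights = new Float32Array(cols * rows); // zero-initialised
  for (const p of particles) {                   // p: {x, y, z, radius}
    const i = Math.floor(p.x / cellSize);
    const j = Math.floor(p.z / cellSize);
    if (i < 0 || i >= cols || j < 0 || j >= rows) continue;
    const top = p.y + p.radius;                  // top of the granule
    const idx = j * cols + i;
    if (top > heights[idx]) heights[idx] = top;
  }
  return heights;
}

// Example: three granules settling into a 2x2-cell patch.
const field = particlesToHeightfield(
  [{ x: 0.2, y: 0.5, z: 0.3, radius: 0.1 },
   { x: 0.7, y: 0.9, z: 0.2, radius: 0.1 },
   { x: 1.4, y: 0.4, z: 1.1, radius: 0.1 }],
  2, 2, 1.0);
console.log(field); // Float32Array [1, 0, 0, 0.5]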
