61. Generating space-time composite images for GIS data / Holness, Carolyn, January 2006
Includes bibliographical references. / For many decades scientists have been collecting marine data from survey boats that spend weeks at sea, covering large areas and collecting data at regular points along the journey. With the introduction of remotely sensed satellite images, large amounts of environmental data (e.g. sea surface temperature) are now available for use in studies of the effect of the ocean on phenomena observed on such journeys. Until recently, mapping this environmental data from the satellite images has been done by averaging daily images for the duration of the survey to obtain a single image approximating the entire time span and the whole area covered in the survey. The problem with this, however, is that in creating these temporal composite images, values are averaged out, so that small variations occurring over short time scales are lost. In an effort to overcome this problem, the idea of a spatio-temporal composite image was proposed. This entails taking pieces of daily images, where each piece represents the area covered by the survey on that date, and merging these pieces to create a single image. This allows the original values from the satellite images to be retained, which provides a much more realistic mapping of the environment. Satellite images and survey data can be viewed and analysed in a Geographical Information System (GIS); however, creating these spatio-temporal composite images manually using the software is a time-consuming and difficult task. This project investigates whether it is possible to develop an application to create these images automatically. On completion of the project it was found that this was not only possible, but that the application created the images more accurately and far more quickly than could be done manually, and was easy to use even without prior GIS experience.
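The merging step the abstract describes can be sketched as follows. This is a minimal illustration, assuming the daily images are already co-registered NumPy arrays and that a per-pixel index records which day the survey covered each location; the function name, array shapes, and toy values are invented for the example, not taken from the thesis.

```python
import numpy as np

def spatiotemporal_composite(daily_images, survey_day_mask):
    """Merge pieces of daily satellite images into one composite.

    daily_images    : array of shape (n_days, height, width), one image per day
    survey_day_mask : integer array of shape (height, width); each cell holds
                      the index of the day the survey covered that location
    """
    n_days, h, w = daily_images.shape
    assert survey_day_mask.max() < n_days  # every referenced day must exist
    rows, cols = np.indices((h, w))
    # For every pixel, keep the value from the image of the day the survey
    # passed through that location, so the original daily values survive
    # instead of being averaged away.
    return daily_images[survey_day_mask, rows, cols]

# Toy example: three daily 2x2 images; the survey covered the left column
# on day 0 and the right column on day 2.
days = np.arange(12, dtype=float).reshape(3, 2, 2)
mask = np.array([[0, 2], [0, 2]])
composite = spatiotemporal_composite(days, mask)
```

The advanced integer indexing does the whole merge in one vectorised step, which is the kind of operation a GIS script would repeat per satellite band.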
62. Extensibility in end-user network applications: a feature or a flaw? / Zimba, Brian Ackim, January 2011
Includes bibliographical references (leaves 52-54). / The rise in global connectivity driven by user demand is bringing about a wave of end-user interconnectivity applications. This, coupled with improvements in software security that have forced a shift from syntactic to semantic attacks, points to an ever-growing likelihood of attacks targeting the human element. The literature predicts that the extensibility of applications presents a growing threat, but the context of this threat beyond the web-browser model remains unclear and uncharted. This work examines the possible threat that extensibility poses in this developing context of greater end-user connectivity.
63. User perception of gaming element effectiveness in a corporate learning application / Arnold, Henry, January 2017
This Conversion Masters in Information Technology thesis gathered users' perceptions about eight gaming elements to determine their effectiveness on aspects of playability, enjoyment and intrinsic motivation needed in a gamified corporate learning application. The study focused on user opinions about a Progress Bar, Individual Leaderboard, Departmental Leaderboard, Timer, In-Game Currency, Badges, Storyline/Theme and Avatar. A gamification application containing these gaming elements was designed and developed to make the evaluation. The application entailed users learning four Information Technology Infrastructure Library (ITIL) processes needed to manage an information technology department in a telecommunications company. The application design process considered the business goals, rules, target behaviours, time limits, rewards, feedback, levels, storytelling, interest, aesthetics, replay or do-overs, user types, activity cycles, fun mechanisms and development tools needed to create a coherent, addictive, engaging and fun user experience. Player types were determined using the Brainhex online survey. Federoff's Game Playability Heuristics model was used to measure the users' perceptions about the playability of the application. Sweetser and Wyeth's Gameflow model was used to measure perceptions about the gaming elements' contribution toward creating an enjoyable experience. Malone and Lepper's Taxonomy of Intrinsic Motivation for Learning was used to measure the gaming elements' ability to promote an intrinsically motivating learning environment. Masterminds, Achievers, Conquerors and Seekers were the most prominent player types found in the Brainhex online survey for which the gamification application design then catered. The staff in the department play-tested the application to evaluate the gaming elements. Overall the Storyline/Theme, suited to Seekers and Masterminds, ranked as the most effective gaming element in this study. 
The users perceived artwork as an essential component of a gamified learning application. The Individual Leaderboard, suited to Conquerors, ranked very closely as the second most effective gaming element. The Storyline/Theme and Individual Leaderboard both performed the strongest against the criteria measuring the playability. The Storyline/Theme was by far the strongest from a gameflow perspective and the Individual Leaderboard from a motivation perspective. The Avatars ranked the worst across all the measurement criteria. Based on quiz results, 86 percent of the staff in the department had learned the material from the gamified training prototype developed in this work. The findings from this study will therefore serve as input for developing a full-scale gamification learning application.
64. Measuring the applicability of open data standards to a single distributed organisation: an application to the COMESA Secretariat / Munalula, Themba, January 2008
Includes abstract. / Includes bibliographical references (leaves 41-43). / This dissertation develops data metrics that represent the extent to which data standards can be applied to an organization's data. The research identified key issues that affect data interoperability or the feasibility of a move towards interoperability. This research tested the unwritten rule that organizational setups tend to regard and design data requirements more from internal needs than interoperability needs. Essentially, by generating metrics that affect a number of data attributes, the research quantified the extent of the gap that exists between organizational data and data standards.
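A gap metric of the kind the dissertation describes might be sketched as follows. This is a hypothetical illustration only: the function, the field names, and the simple set-overlap measure are invented for the example and are not the study's actual metrics or COMESA data attributes.

```python
def standards_coverage(org_fields, standard_fields):
    """Fraction of a data standard's attributes that an organisation's
    data already provides, plus the attributes still missing.

    Both inputs are lists of attribute names; comparison is
    case-insensitive. A low coverage score quantifies the gap between
    organisational data and the standard.
    """
    org = {f.strip().lower() for f in org_fields}
    std = {f.strip().lower() for f in standard_fields}
    coverage = len(org & std) / len(std)
    gap = sorted(std - org)          # attributes the organisation lacks
    return coverage, gap

# Illustrative attribute names (not from the COMESA Secretariat's schema):
cov, gap = standards_coverage(
    ["Country", "Year", "Partner"],
    ["country", "year", "partner", "commodity"],
)
```

In practice such a score would be computed per dataset and per standard, so that the organisation can see where internally driven designs diverge most from interoperability requirements.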
65. Towards a system redesign for better performance and customer satisfaction: a case study of the ICTS helpdesk at the University of Cape Town / Balikuddembe, Joseph Kibombo, January 2005
Includes bibliographical references. / This paper presents the findings from a study carried out to investigate how the design of knowledge management systems could be improved for enhanced performance and greater customer satisfaction. The ICTS Department's helpdesk at the University of Cape Town, South Africa, was the venue for this case study. The study set out to meet the following objectives: undertaking a knowledge acquisition strategy by carrying out a systems evaluation and analysis of the existing web-based user support system; suggesting a knowledge representation model for an adaptive web-based user support system; and developing and testing an online troubleshooter prototype for an improved knowledge use support system. To achieve these objectives, knowledge engineering techniques were deployed on top of a qualitative research design. Questionnaires, supplemented by interview guides and observations, were the research tools used in gathering the data. In addition, a representative sample of the ICTS clientele and management was interviewed. It was discovered that poorly designed knowledge management systems cause frustration among the clientele who interact with them. Specifically, it was found that the language used for knowledge representation plays a vital role in determining how well users can interpret knowledge items in a given knowledge domain. In other words, knowledge modelling can improve knowledge representation if knowledge engineering techniques are appropriately followed in designing knowledge-based systems. It was concluded that knowledge representation can be improved significantly if, firstly, the ontology technique is embraced as a mechanism of knowledge representation; secondly, hierarchies and taxonomies are used to improve navigability in the knowledge structure; and thirdly, visual knowledge representation is used to supplement textual knowledge, which adds more meaning for the user and can even cater for novice users.
66. Expert system adjudication of hospital data in HIV disease management / Joseph, Asma, January 2012
Includes abstract. / Includes bibliographical references. / HIV Disease Management Programs (DMPs) are comprehensive programs designed to manage the HIV-infected patient's treatment in an integrated manner.
67. Decision tree classifiers for incident call data sets / Igboamalu, Frank Nonso, January 2017
Information technology (IT) has become one of the key technologies for economic and social development in any organization. The management of IT incidents, and particularly resolving problems quickly, is therefore of concern to IT managers. Delays can result when incorrect subjects are assigned to IT incident calls, because the person sent to remedy the problem has the wrong expertise or has not brought the software or hardware needed to help that user. In the case study used for this work, there are no management checks in place to verify the assignment of incident description subjects. This research aims to develop a method that will tackle the problem of wrongly assigned subjects for incident descriptions. In particular, this study explores the IT incident call database of an oil and gas company as a case study. The approach was to explore the IT incident descriptions and their assigned subjects; thereafter, the correctly assigned records were used to train decision tree classification algorithms using the Waikato Environment for Knowledge Analysis (WEKA) software. Finally, the records incorrectly assigned a subject by human operators were used for testing. The J48 algorithm gave the best performance and accuracy, and was able to correctly assign subjects to 81% of the records wrongly classified by human operators.
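The train-on-correct, test-on-misassigned workflow can be sketched as follows. The thesis used WEKA's J48 (a C4.5 implementation); this sketch substitutes scikit-learn's CART-based `DecisionTreeClassifier` and invented incident descriptions, so it illustrates the pipeline rather than reproducing the study's data or algorithm.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy incident calls with correctly assigned subjects (invented examples,
# not records from the oil and gas company in the case study).
descriptions = [
    "cannot connect to email server",
    "outlook mailbox not syncing",
    "printer out of toner on floor 3",
    "print job stuck in queue",
    "laptop will not boot after update",
    "blue screen on startup",
]
subjects = ["email", "email", "printer", "printer", "hardware", "hardware"]

# Bag-of-words features feeding a decision tree, trained only on the
# correctly assigned records.
model = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(descriptions, subjects)

# New or human-misassigned records can then be given a predicted subject.
predicted = model.predict(["email not syncing on laptop"])
```

With real data the tree would be evaluated on the records human operators had misclassified, mirroring the 81% reassignment accuracy the study reports for J48.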
68. A comparison of a factor-based investment strategy and machine learning for predicting excess returns on the JSE / Drue, Stefan, 28 February 2020
This study investigated the application of machine learning to portfolio selection by comparing a Factor Based Investment (FBI) strategy to one using a Support Vector Machine (SVM) performing a classification task. The FBI strategy uses regression to identify factors correlated with returns, by regressing excess returns against factor values using historical data from the JSE; a portfolio-sort method is used to construct portfolios. The machine learning model was trained on historical share data from the Johannesburg Stock Exchange and tasked with classifying whether a share over- or under-performed relative to the market. Shares were ranked according to their probability of over-performance and divided into equally weighted quartiles. The excess return of the top and bottom quartiles was used to calculate portfolio payoff, which is the basis for comparison. The experiments were divided into time periods to assess the consistency of the factors over different market conditions: pre-financial crisis, during the financial crisis, post-financial crisis, and over the full period. The study was conducted in the context of the Johannesburg Stock Exchange. Historical data was collected for a 15-year period, from May 2003 to May 2018, on the constituents of the All Share Index (ALSI). A rolling-window methodology was used, in which the training and testing window was shifted with each iteration over the data. This allowed a larger number of predictions to be made and a greater period of comparison with the factor-based strategy. Fourteen factors were used individually as the basis for portfolio construction, while combinations of factors into Quality, Value, and Liquidity and Leverage categories were used to investigate the effect of additional inputs to the model. Furthermore, experiments using all factors together were performed.
It was found that a single-factor FBI can consistently outperform the market and a multi-factor FBI also provided consistent excess returns, but the SVM provided consistently larger excess returns with a wide range of factor inputs and beat the FBI in 12 of the 14 different experiments over different time periods.
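The rank-and-quartile step can be sketched as follows. The factor values here are synthetic stand-ins generated for the example (the thesis used fourteen real JSE factors), and the probability-calibrated SVM is scikit-learn's `SVC`; only the overall workflow matches the study.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data: 200 shares x 3 factor exposures. The binary label marks
# whether a share beat the market; here it is driven by the first factor
# plus noise, purely so the example has learnable structure.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

train, test = slice(0, 150), slice(150, 200)
svm = SVC(probability=True, random_state=0).fit(X[train], y[train])

# Rank test-set shares by predicted probability of over-performance and
# split them into equally weighted quartiles, as in the thesis; portfolio
# payoff would compare the top quartile's excess return with the bottom's.
proba = svm.predict_proba(X[test])[:, 1]
order = np.argsort(proba)[::-1]          # highest probability first
quartiles = np.array_split(order, 4)
top, bottom = quartiles[0], quartiles[-1]
```

In a rolling-window backtest this fit-rank-split step would be repeated as the training and testing windows slide forward through the 2003-2018 sample.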
69. Introduction to Java programming for the high school student / Tweedie, Sinclair, January 2004
Bibliography: p. 130-146. / The objective of this project was to evaluate the effectiveness of teaching high school students the Java language using purpose-built Java classes. These classes were designed to simplify the syntax of the language and to introduce the concept of inheritance. Two Java classes were created. The main class used an artefact called a Tortoise, based on the Logo idea of a Turtle, and provided a number of graphical methods for the user. The second class, called "Please", simplified the Java syntax through a number of class methods that required a very straightforward, English-like syntax.
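The thesis's Tortoise and Please classes are Java; as an analogous sketch of the same design idea in Python, the class below hides the coordinate arithmetic behind simple English-like methods. Everything here (the method names and the path-recording behaviour) is invented for illustration and is not the thesis's actual class.

```python
import math

class Tortoise:
    """A Logo-style drawing agent in the spirit of the thesis's Tortoise:
    beginners call simple methods while the trigonometry stays hidden.
    This sketch records the path instead of drawing to a screen."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0                 # degrees, 0 = east
        self.path = [(self.x, self.y)]

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def turn_left(self, degrees):
        self.heading = (self.heading + degrees) % 360

# A beginner can draw a square without ever touching the trigonometry:
t = Tortoise()
for _ in range(4):
    t.forward(10)
    t.turn_left(90)
```

The pedagogical point is the wrapper itself: inheritance and more advanced graphics can later be introduced by subclassing, just as the thesis uses its classes to motivate inheritance.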
70. Video quality requirements for South African Sign Language communications over mobile phones / Erasmus, Daniel, January 2012
Includes abstract. / Includes bibliographical references. / This project aims to find the minimum video resolution and frame rate that support intelligible cell-phone-based video communication in South African Sign Language.