111

THE FEASIBILITY OF AN EFFECTIVE DATA WAREHOUSING SOLUTION FOR A TERTIARY INSTITUTION

Nazir, Amer Bin 12 October 2009 (has links)
Even though industry in South Africa has utilized data warehousing technologies successfully for a number of years, tertiary institutions have lagged behind. This can in part be attributed to the high costs involved, the many failures of the past, and the fact that decision makers at these institutions are unaware of what data warehousing is and the advantages it can bring. Several factors, however, are forcing tertiary institutions in the direction of data warehousing, and they need all the help they can get to make this process as easy as possible. Most of the tertiary institutions that still survive today came through periods of tough rationalizations and mergers. In order to stay alive and competitive, they have grown through the years and have developed into large businesses in their own right. On the one hand they had to make ends meet with shrinking government subsidies; on the other, they had to provide increasingly detailed statistics to the government. This change has resulted in a more business-like management of these institutions, and strategic decision making has become of the utmost importance for tertiary institutions to meet the frequent changes in the government funding structure. The University of the Free State initially tried to meet this need with an online transaction processing system developed in-house. Such systems, however, are designed to optimize transactional processing, and the features which increase their efficiency are generally those which also make it difficult to extract information. When that did not work, a new online transaction processing system was bought from an international company at a huge cost. During the transfer of data from the old system to the new one (with a different database design), numerous data conversion errors generated anomalies and a lack of integrity in the database. The new system also proved inadequate for providing the statistics required by the Department of Education. A system was subsequently purchased that used ASCII files prepared by the online transaction processing system to generate fixed reports according to the Department of Education requirements. This system provided a workable solution, but every change in requirements meant that new reports had to be developed, and it was useless for institutional planning and forecasting. This study reports the advantages and disadvantages of the current systems used to provide statistics to the Department of Education, and then proposes a new system based on data warehousing principles. The dimensional star schema design for a data warehouse is provided, and the methods used to transfer, load and extract data are discussed in detail. The data warehouse solution is then compared to the current solutions. The conclusion is that a data warehouse is a feasible solution to the strategic information problems tertiary institutions are facing today. An effective management information system using data warehousing can be developed in-house on a low budget, institutional data can be fitted into dimensional modelling star schemas, and error-free data can be provided to end-users by developing proper extraction, transformation and loading packages. The data surfaced to end-users from relational online analytical processing can provide statistics to government and can be used for general planning and forecasting purposes.
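As a concrete illustration of the dimensional star schema approach the abstract argues for, the sketch below builds a minimal enrolment star in SQLite: one additive fact table surrounded by student, course and time dimensions, with a faculty-per-year aggregate of the kind statutory reporting requires. The table and column names are hypothetical; the thesis's actual schema is not reproduced here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables carry descriptive attributes, one row per member.
cur.executescript("""
CREATE TABLE dim_student (student_key INTEGER PRIMARY KEY,
                          gender TEXT, home_language TEXT);
CREATE TABLE dim_course  (course_key INTEGER PRIMARY KEY,
                          faculty TEXT, qualification TEXT, course_name TEXT);
CREATE TABLE dim_time    (time_key INTEGER PRIMARY KEY,
                          academic_year INTEGER, semester INTEGER);

-- Fact table: one row per enrolment, foreign keys into each dimension
-- plus additive measures that can be summed for statutory reports.
CREATE TABLE fact_enrolment (
    student_key INTEGER REFERENCES dim_student(student_key),
    course_key  INTEGER REFERENCES dim_course(course_key),
    time_key    INTEGER REFERENCES dim_time(time_key),
    credits     REAL,
    passed      INTEGER
);
""")

# A typical statutory statistic: total credits per faculty per year.
report = cur.execute("""
    SELECT c.faculty, t.academic_year, SUM(f.credits) AS total_credits
    FROM fact_enrolment f
    JOIN dim_course c ON c.course_key = f.course_key
    JOIN dim_time   t ON t.time_key   = f.time_key
    GROUP BY c.faculty, t.academic_year
""").fetchall()
print(report)
```

Because every measure in the fact table is additive across all three dimensions, the same star answers both government reporting queries and ad-hoc planning queries without new report development.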
112

ENHANCING THE USER EXPERIENCE FOR A WORD PROCESSOR APPLICATION THROUGH VISION AND VOICE

Beelders, Tanya René 10 November 2011 (has links)
Multimodal interfaces may herald a significant improvement over the GUIs which have been commonplace until now. It is also possible that a multimodal interface could provide a more intuitive and natural means of interaction which, simultaneously, removes the reliance on traditional, manual means of interaction. Eye gaze and speech are common components of natural human-human communication and were proposed in this study for use in a multimodal interface for a popular word processor. In order for a combination of eye gaze and speech to be a viable interface for a word processor, it must provide a means of text entry and facilitate editing and formatting of the document contents. For the purposes of this study a simple speech grammar was used to activate common word processing tasks, as well as for selection of text and navigation through a document. For text entry, an onscreen keyboard was provided, the keys of which could be pressed by looking at the desired key and then uttering an acceptable verbal command. These functionalities were provided in an adapted Microsoft Word 2007® to increase the customisability, and possibly the usability, of the word processor interface and to provide alternative means of interaction. The proposed interaction techniques also had to be able to execute typical mouse actions, such as point-and-click. The usability of eye gaze and speech was determined using longitudinal user testing and a set of tasks specific to the functionality. Results indicated that the use of a gravitational well increased the usability of the speech and eye gaze combination when used for pointing-and-clicking, whereas the use of a magnification tool did not. The gravitational well did, however, result in more incorrect clicks, owing to natural human behaviour and the ease of target acquisition the well affords. Participants nevertheless learnt how to use the interaction technique over the course of time, although the mouse remained the superior pointing device. Speech commands were found to be as usable as, or even more usable than, the keyboard and mouse for editing and selection purposes, although navigation was hindered to some extent. For text entry purposes, the keyboard far surpasses eye gaze and speech in terms of performance as an input method, as it is both faster and results in fewer errors. However, even though the participants were required to complete a number of sessions and a number of text entry tasks per session, more practice may be required for using eye gaze and speech for text entry. Subjectively, participants felt comfortable with the multimodal interface and also indicated that they felt improvement as they progressed through their sessions. Observations of the participants likewise indicated that, as time passed, they became more adept at using the multimodal interface for all necessary interactions. In conclusion, eye gaze and speech can be used instead of a pointing device, and speech commands are recommended for use within a word processor in order to accomplish common tasks. For the purposes of text entry, more practice is advocated before a recommendation can be made. Together with progress in hardware development and availability, this multimodal interface may allow the word processor to further exploit emerging technologies and be a forerunner in the use of multimodal interfaces in other applications.
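The gravitational well the abstract credits with improved pointing usability can be sketched as a simple attraction applied to each gaze sample. The formulation below (a linear pull that grows toward the target centre, with hypothetical radius and strength parameters) is one common variant, not necessarily the one implemented in the thesis.

```python
import math

def apply_gravity_well(gaze_x, gaze_y, targets, well_radius=60.0, strength=0.6):
    """Pull a noisy gaze sample toward the nearest on-screen target.
    Each target is (centre_x, centre_y); the pull grows as the gaze
    gets closer, which eases acquisition of small widgets."""
    best, best_d = None, float("inf")
    for tx, ty in targets:
        d = math.hypot(gaze_x - tx, gaze_y - ty)
        if d < best_d:
            best, best_d = (tx, ty), d
    if best is None or best_d > well_radius:
        return gaze_x, gaze_y            # outside every well: leave untouched
    pull = strength * (1.0 - best_d / well_radius)   # 0 at the rim, max at centre
    return (gaze_x + pull * (best[0] - gaze_x),
            gaze_y + pull * (best[1] - gaze_y))

# Example: a key centred at (100, 100) captures a gaze sample landing nearby.
print(apply_gravity_well(115.0, 92.0, [(100.0, 100.0), (300.0, 100.0)]))
```

The same mechanism also explains the reported increase in incorrect clicks: a sample that strays near an unintended key is pulled onto it just as readily.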
113

USING MOBILE LEARNING APPLICATIONS TO ENCOURAGE ACTIVE CLASSROOM PARTICIPATION: TECHNICAL AND PEDAGOGICAL CONSIDERATIONS

Khomokhoana, Pakiso Joseph 11 November 2011 (has links)
Higher education institutions are experiencing burgeoning growth in student enrolment. The subsequent increase in undergraduate class sizes means that the needs of individual students are no longer effectively addressed. Students are also less likely to actively participate in these large classes, and such students are less likely to be successful in their studies. In order to support the learning needs of the student population, there are various strategies and tools that can be used to encourage active classroom participation. This study investigated how mobile learning applications can be used to encourage active participation in large undergraduate Computer Science classes. The study identified the four main teaching and learning challenges experienced by lecturers and students in large undergraduate courses: a lack of resources, the facilitation of student assessment and feedback, pressure to increase student throughput, and the academic under-preparedness of students. The researcher established that these challenges are not easy to address if a traditional teacher-centred approach is used, mainly because this approach is ineffective in supporting students' construction of conceptual understanding. Upon consideration of various teaching and learning issues, a student-centred approach was identified as more promising for quality teaching and successful learning in the 21st century. In a teaching and learning environment where a student-centred approach is practised, active classroom participation was identified as one viable solution with the potential to lower the intensity of the four stated challenges, and the researcher demonstrated how active classroom participation could mitigate their effects. Some of the active participation strategies identified from contemporary literature were also implemented by the lecturer in her classes. On realising that active classroom participation strategies are not easy to implement, especially in large classes, the researcher opted for applications that could automate some of these strategies. He specifically decided to use mobile learning applications because, in this era, most students own cellular phones. Since the existing applications could not address the research questions and objectives of this study, he opted for a custom-developed application called MobiLearn. The technical and pedagogical usability of this application were then evaluated in terms of metrics established from the literature: technical usability in terms of 12 metrics and pedagogical usability in terms of nine. The study employed a mixed methods design, mainly qualitative with some quantitative enhancements. Data was collected through focus group discussions with voluntary participants from the selected population, a questionnaire survey, usage data extracted from the application, a face-to-face interview with the lecturer who used the MobiLearn application in her classes, and class attendance records. Qualitative data was analysed according to qualitative content analysis principles, while quantitative data was analysed by means of statistical analysis. The application was evaluated as both technically and pedagogically usable, and showed potential to encourage active classroom participation among the students who used it.
Some students indicated that they experienced technical problems in accessing the MobiLearn application, and that they were not motivated to use it. To address the third and final objective of this study, namely to mitigate problems such as these, the study compiled a set of technical and pedagogical guidelines for best practice in the use of mobile learning applications to encourage active participation in similar contexts.
114

An object oriented model of machine vision

Brown, Gary January 1997 (has links)
In this thesis an object oriented model is proposed that satisfies the requirements for a generic, customisable, reusable and flexible machine vision framework. These requirements are identified as: ease of customisation for a particular application domain; independence from the image definition; independence from the shape representation scheme; the ability to add new domain-specific shape descriptors; independence from the implemented machine vision algorithms; and the ability to maximise reuse of the generic framework. The thesis begins with a review of key machine vision functions and traditional architectures. In particular, machine vision architectures predicated on a process oriented framework are examined in detail and evaluated against the criteria stated above. An object oriented model is developed within the thesis, identifying the key classes underlying the machine vision domain. The responsibilities of these classes, and the relationships between them, are analysed in the context of high-level machine vision tasks, for example object recognition. This object oriented approach is then contrasted with the more traditional process oriented approach. The object oriented model and framework are subsequently evaluated through a customisation that illustrates an example machine vision application, namely Surface Mounted Electronic Assembly inspection. The object oriented model is also evaluated in the context of two functional machine vision applications described in the literature. The model developed in this thesis incorporates the fundamental object oriented concepts of abstraction, encapsulation, inheritance and polymorphism. The results show that an object oriented approach does achieve the requirements for a generic, customisable, reusable and flexible machine vision framework.
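A minimal sketch of the kind of class structure such a framework implies might look as follows; the class names are illustrative assumptions, chosen to show how abstraction, inheritance and polymorphism keep the framework independent of the image definition and the shape representation scheme.

```python
from abc import ABC, abstractmethod

class Image:
    """Wraps pixel data so the framework stays independent of the
    underlying image definition (grey-scale, colour, range, ...)."""
    def __init__(self, pixels):
        self.pixels = pixels

class ShapeDescriptor(ABC):
    """Abstract representation scheme; concrete subclasses supply
    domain-specific descriptors without changing the framework."""
    @abstractmethod
    def matches(self, other: "ShapeDescriptor") -> bool: ...

class BoundaryDescriptor(ShapeDescriptor):
    """One concrete scheme: a boundary encoded as a chain code."""
    def __init__(self, chain_code):
        self.chain_code = chain_code
    def matches(self, other):
        return (isinstance(other, BoundaryDescriptor)
                and self.chain_code == other.chain_code)

class Recogniser:
    """High-level task object: recognition is expressed against the
    abstract descriptor interface, so new schemes plug in unchanged."""
    def __init__(self, known_shapes):
        self.known_shapes = known_shapes
    def recognise(self, candidate: ShapeDescriptor):
        return [s for s in self.known_shapes if s.matches(candidate)]

library = [BoundaryDescriptor(chain_code=(0, 1, 2, 3))]
print(Recogniser(library).recognise(BoundaryDescriptor(chain_code=(0, 1, 2, 3))))
```

New domain-specific descriptors are added by subclassing ShapeDescriptor, so the Recogniser and the rest of the generic framework are reused unchanged, which is exactly the reuse requirement the thesis evaluates.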
115

Isosurface modelling of soft objects in computer graphics

McPheeters, Craig William January 1990 (has links)
There are many different modelling techniques used in computer graphics to describe a wide range of objects and phenomena. In this thesis, details of research into the isosurface modelling technique are presented. The isosurface technique is used in conjunction with more traditional modelling techniques to describe the objects needed in the different scenes of an animation. The isosurface modelling technique allows the description and animation of objects that would be extremely difficult, or impossible to describe using other methods. The objects suitable for description using isosurface modelling are soft objects. Soft objects merge elegantly with each other, pull apart, bubble, ripple and exhibit a variety of other effects. The representation was studied in three phases of a computer animation project: modelling of the objects; animation of the objects; and the production of the images. The research clarifies and presents many algorithms needed to implement the isosurface representation in an animation system. The creation of a hierarchical computer graphics animation system implementing the isosurface representation is described. The scalar fields defining the isosurfaces are represented using a scalar field description language, created as part of this research, which is automatically generated from the hierarchical description of the scene. This language has many techniques for combining and building the scalar field from a variety of components. Surface attributes of the objects are specified within the graphics system. Techniques are described which allow the handling of these attributes along with the scalar field calculation. Many animation techniques specific to the isosurface representation are presented. By the conclusion of the research, a graphics system was created which elegantly handles the isosurface representation in a wide variety of animation situations. This thesis establishes that isosurface modelling of soft objects is a powerful and useful technique which has wide application in the computer graphics community.
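The scalar-field basis of the technique can be illustrated with a minimal soft-object (metaball-style) field: each source contributes a smoothly decaying value, the fields of nearby sources sum, and the object's surface lies wherever the summed field equals a chosen iso-value. The inverse-square falloff below is a generic choice for illustration, not necessarily the one used in the thesis's field description language.

```python
import math

def field(point, sources):
    """Scalar field of a simple soft-object model: each source
    (centre, radius) contributes a smoothly decaying value."""
    x, y, z = point
    total = 0.0
    for (cx, cy, cz), r in sources:
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        total += r * r / (d2 + 1e-9)        # inverse-square falloff
    return total

def on_isosurface(point, sources, iso=1.0, tol=0.05):
    """A point lies on the soft object where the field equals iso."""
    return abs(field(point, sources) - iso) < tol

# Two unit-radius blobs: as they approach, their fields sum and the
# iso = 1.0 surface merges smoothly, the characteristic soft-object blend.
sources = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
midpoint = (0.75, 0.0, 0.0)
print(field(midpoint, sources))   # > 1 here, so the two blobs have merged
```

Animating the source positions and radii over time is what produces the merging, pulling apart, bubbling and rippling effects the abstract describes.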
116

Perceptual quality driven 3-D video over networks

Hewage, Chaminda T. E. R. January 2008 (has links)
3-D video in day-to-day life will enhance the way we represent real-world scenery and provide more natural conditions for human interaction. Therefore, 3-D video has the potential to be the next killer application in multimedia communications. However, the demand for resources (e.g. bandwidth), 3-D quality evaluation and error protection are challenges to be addressed. Thus, this thesis addresses the issues related to the transmission of 3-D video over communication networks, including compression, quality evaluation, error resilience and error concealment. The first part of the thesis investigates encoding approaches for 3-D video in terms of compression efficiency and adaptability to existing communication technologies. Moreover, an encoding configuration is proposed for colour plus depth video coding based on scalable video coding principles. The proposed encoding configuration shows improved compression efficiency and scalability, which can be utilized to scale conventional video applications into stereoscopic video with a minimal increase in the bandwidth required. Quality evaluation issues of stereoscopic video are addressed in the second part of the thesis. The correlations between objective and subjective quality ratings are derived for the range of compression ratios and packet loss rates considered. The results show high correlation between candidate objective measures (e.g. PSNR of the colour image) and the measured 3-D perceptual quality attributes. The third part of the thesis investigates efficient error resilience and concealment methods for backward-compatible stereoscopic video transmission over wired/wireless networks. In order to provide enhanced error recovery, the proposed methods utilize inherent characteristics of colour plus depth video and their contributions towards improved perceived quality. The proposed error resilience methods improve 3-D perception compared to equally protected transmission of colour plus depth map video. Similarly, the proposed error concealment methods recover missing information more effectively than existing 2-D error concealment methods.
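PSNR, cited above as a candidate objective measure, is computed from the mean squared error between a distorted image and its reference; a minimal sketch:

```python
import math

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    each given here as a flat sequence of pixel intensities."""
    if len(reference) != len(distorted):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# The colour view and the depth map of a stereoscopic pair can each be
# scored against their uncompressed originals (toy 4-pixel example).
ref  = [120, 121, 119, 124]
dist = [118, 123, 119, 120]
print(f"PSNR = {psnr(ref, dist):.2f} dB")
```

Correlating scores like this against subjective ratings across compression ratios and loss rates is the kind of analysis the second part of the thesis performs.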
117

Radio resource management for cognitive radio networks

Pirmoradian, Mahdi January 2012 (has links)
The cognitive radio concept is a promising technology for coping with the spectrum scarcity issue in emerging wireless technology. Practical cognitive radio, as an intelligent radio, is on the horizon: a system able to observe the radio environment, understand its situation, and adapt its transceiver parameters without disruption to the licensed service. The main functionality of cognitive radio is dynamic spectrum management using underlay or overlay spectrum-sharing mechanisms. This thesis studies several objectives in cognitive radio networks, namely: cumulative interference in multi-user overlay networks; effective capacity optimisation in time-varying imperfect fading channels; and diverse spectrum decision schemes (the Maximum Entropy Channel Access, MECA, and Adaptive Spectrum Opportunity Access, ASOA, schemes) in overlay networks. The Green Cognitive Radio (GCR) concept is also introduced for enhancing energy efficiency in overlay networks. The cumulative interference at a cell-edge active primary receiver is estimated under two scenarios: the broadcast of a receiver beacon signal and the broadcast of a licensed transmitter beacon signal. In the proposed system topology, the cognitive users are distributed within and outside the licensed coverage area with constant density. The results indicate that the cumulative interference is significantly lower under the receiver beacon scenario than under the licensed transmitter scenario. Additionally, the optimisation of the effective capacity of a secondary user, subject to interference and transmission power constraints, in imperfect fading channels is studied. In this case, cross-channel state information is a key factor in adapting transmission power and channel capacity accordingly. The numerical results show that effective capacity is affected by increasing cross-channel error (on the secondary transmitter to primary receiver link) and by QoS delay requirements. The study is completed by proposing a power control policy that minimises the interference level at the licensed receiver subject to a desired effective capacity level and a transmission power constraint. The performance of the proposed spectrum decision schemes (MECA and ASOA) is then examined by comparison with the Random Channel Access (RCA), Minimum Channel Rate (MCR) and First Opportunity Channel Access (FOCA) schemes over the simulation period. The MECA scheme uses a weighted entropy function to assess the usefulness of the remaining available idle channels, and so selects an appropriate spectrum opportunity for secondary data delivery. The performance reveals that MECA and ASOA can potentially be considered viable approaches to spectrum selection. Additionally, for the GCR aspect, an opportunistic power control policy using the remaining idle channel lifetime is proposed to mitigate interference power at the primary receiver. Overall, the thesis develops and proposes techniques for decreasing total interference in overlay networks, optimising effective capacity in underlay networks, making feasible spectrum selections, and applying the green cognitive radio concept in the field of dynamic spectrum access networks.
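The abstract does not spell out MECA's weighted entropy function, but its role can be illustrated with a hypothetical score: channels whose occupancy is highly uncertain (high binary entropy) are penalised, and longer expected idle times are preferred. All function names and parameters below are illustrative assumptions, not the thesis's actual formulation.

```python
import math

def usefulness(idle_probability, expected_idle_time, weight=1.0):
    """One plausible 'usefulness' score for an idle channel: the binary
    entropy of its occupancy is low when the channel is reliably idle,
    and the score is weighted by the expected remaining idle time."""
    p = min(max(idle_probability, 1e-9), 1 - 1e-9)
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # binary entropy
    return weight * expected_idle_time * (1.0 - h)

def select_channel(channels):
    """Pick the idle channel with the highest usefulness score."""
    return max(channels, key=lambda c: usefulness(c["p_idle"], c["idle_time"]))

channels = [
    {"id": 1, "p_idle": 0.90, "idle_time": 4.0},
    {"id": 2, "p_idle": 0.60, "idle_time": 9.0},
    {"id": 3, "p_idle": 0.95, "idle_time": 2.5},
]
print(select_channel(channels)["id"])   # channel 1 wins on this toy input
```

The design intuition is the one the abstract states: a secondary user should spend its transmission on the opportunity least likely to be reclaimed mid-delivery.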
118

COMPARING BRAIN-COMPUTER INTERFACES ACROSS VARYING TECHNOLOGY ACCESS LEVELS

Dollman, Gavin John 20 August 2014 (has links)
A brain-computer interface (BCI) is a device that uses neurophysiological signals measured from the brain to activate external machinery. BCIs have traditionally been used to enhance the standard of living of severely disabled patients, which has resulted in a shortage of data on how BCIs perform with able-bodied individuals. There has recently (2012) been a trend towards BCI research involving able users, but these studies are still too few to make a substantial impact. Additionally, traditional input methods are being replaced or supplemented by alternative natural modes of interaction, which have become known as NUIs. To investigate the suitability of a BCI as a NUI, this study used the Emotiv headset to provide direct measurement of a participant's performance while performing tasks similar to wheelchair manipulation, in order to determine whether a participant's access to traditional input methods influences their performance. Thus, the main aim of this study was to investigate the usability of an Emotiv for robot navigation. Additionally, the study aimed to discover whether a user's performance differed when using a keyboard compared to the Emotiv, as well as to investigate whether a user's performance improved in the short term through repetitive use of the Emotiv. In order to compare the usability of the Emotiv to a keyboard, the participants were placed into groups based on their exposure to traditional input methods. This was verified based on their individual expertise rating, which was a measure of frequency and length of use. The test instrument consisted of a written program that navigated a pair of Mindstorm NXT robots across a custom-designed test course. Data was collected via usability testing which measured learnability, efficiency and effectiveness. Efficiency was measured as the time taken to complete a task, while effectiveness was a measure of the errors made by a participant when completing a task. Results indicated that there was no significant difference between the groups' efficiency and effectiveness when using the Emotiv to complete a task. Thus, a user's previous experience with a traditional input method does not influence their performance with an Emotiv when navigating a robot. This result indicates that the interface is intuitive to use and that the Emotiv could therefore be suitable as a NUI. The results for the usability metrics efficiency and effectiveness indicated a significant difference between performance with the Emotiv and with a keyboard: with the Emotiv, participants took more time to complete a task and made more errors. This discrepancy was attributed to cognitive theory, as it is believed that the participants violated their preformed schemas, which affected their performance. However, the participants quickly became comfortable with the Emotiv, which supports the evidence that the interface is intuitive to use. No significant improvement in either efficiency or effectiveness was detected with repetitive use of the Emotiv; thus, repetitive use of the Emotiv to navigate a robot does not improve a user's performance over a short period of time. These results indicate that, in terms of efficiency and effectiveness, the keyboard is the superior interface. The results also revealed that a participant's performance is not affected by their exposure to traditional input methods when utilising a BCI.
Thus, the Emotiv is intuitive to use and appears suitable for use as a NUI. This study showed that the Emotiv is an intuitive interface and can be used with little to no previous experience.
119

COMPARING THE SENSOR GLOVE AND QUESTIONNAIRE AS MEASURES OF COMPUTER ANXIETY

Nkalai, Tlholohelo Stephania 21 August 2014 (has links)
A vast amount of literature regarding computer anxiety exists, and consequently a number of different definitions of computer anxiety have been proposed. Regardless of the numerous definitions, several researchers agree that computer anxiety involves emotional 'fear' or 'apprehension' when interacting, or anticipating interaction, with computers. Subsequently, some individuals who experience computer anxiety avoid using computers. This situation is undesirable because these days it is almost always a necessity for people to use computers in the workplace. It is therefore important to investigate computer anxiety extensively, including measures which can be implemented to mitigate it. Different findings exist about the correlates of computer anxiety: gender, age, computer ownership, educational attainment and computer experience. For example, while some research findings state that females experience higher levels of computer anxiety than males, other findings assert that males experience computer anxiety more than females. The contradictory findings regarding the correlates of computer anxiety could be attributed to the fact that most of the research studies which investigated computer anxiety relied solely on existing computer anxiety questionnaires. Using questionnaires exclusively poses various limitations, which include relying on the 'subjective' responses of the participants. This research study incorporated another measurement of computer anxiety in addition to an existing computer anxiety questionnaire, the Computer Anxiety Rating Scale. This additional measurement was performed using an instrument that measures the physiological signals of a participant: the Emotion RECognition system (EREC), which measures skin temperature, skin resistance and heart rate. Apart from these two, other data collection methods were used, namely self-developed pre-test and post-test questionnaires, observations and interviews. With these various measurements incorporated, computer anxiety was investigated taking into consideration the following research questions: (1) To what extent does a sensor glove add value in measuring computer anxiety during usability testing when compared to anxiety questionnaires and observations? (2) To what extent is computer anxiety influenced by age, gender, computer experience, educational attainment, and ownership of a personal computer according to the anxiety questionnaire and the sensor glove? From the findings of the study in relation to the first research question, it can be concluded that the sensor glove does not add value; instead, it may add value when measuring stress. This means that although the EREC sensor glove measures skin conductance, changes in skin conductance may indicate changes in stress levels rather than anxiety levels. Regarding the second research question, it can be concluded that computer anxiety was not influenced by age, gender, computer experience, educational attainment, or ownership of a personal computer according to the anxiety questionnaire and the sensor glove.
120

Evaluation of alternative discrete-event simulation experimental methods

Warn, Alan James January 2003 (has links)
The aim of the research was to assist non-experts in producing meaningful, non-terminating discrete event simulation studies. The exemplar used was manufacturing applications, in particular sequential production lines. The thesis addressed the selection of methods for introducing randomness, setting the length of individual simulation runs, and determining the conditions for starting measurements. "Received wisdom" in these aspects of simulation experimentation was not accepted. The research made use of a Markov chain queuing model and statistical analysis of exhaustive computer-based experimentation using test models. A specific production-line model drawn from the motor industry was used as a point of reference. A distinctive, quality-control-like process for facilitating the controlled introduction of "representative randomness" from a pseudo-random number generator was developed, rather than relying on a generator's a priori performance in standard statistical tests of randomness. This approach proved to be effective and practical. Other results included the following. The distortion in measurements due to the initial conditions of a simulation run of a queue was only corrected by a lengthy run, and not by discarding early results. Simulation experiments on the same queue demonstrated that a single long run gave greater accuracy than multiple runs. The choice of random number generator is less important than the choice of seed: notably, RANDU (a "discredited" MLCG) with careful seed selection was able to outperform, in tests, both real random numbers and other MLCGs whose seeds were chosen randomly, 99.8% of the time. Similar results were obtained for the Mersenne Twister and Descriptive Sampling. Descriptive Sampling was found to provide the best samples and was less susceptible to errors in the forecast of the required sample size. A method of determining the run length of the simulation that would ensure the run was representative of the true conditions was proposed. An interactive computer program was created to assist in the calculation of the run length of a simulation and to determine seeds so as to obtain "highly representative" samples, demonstrating the facility required in simulation software to support the selected methods.
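The RANDU recurrence mentioned above is well documented (x_{n+1} = 65539 x_n mod 2^31, with an odd seed); the sketch below drives a single long run of an M/M/1 queue from it, the kind of test-model experiment the thesis describes. The seed and traffic parameters are illustrative assumptions, not the thesis's actual experimental settings.

```python
import math

def randu(seed):
    """RANDU generator: x_{n+1} = 65539 * x_n mod 2^31 (seed should be
    odd for a full-period sequence). Yields uniforms in (0, 1)."""
    x = seed
    while True:
        x = (65539 * x) % (2 ** 31)
        yield x / 2 ** 31

def mm1_mean_time_in_system(uniforms, arrival_rate, service_rate, n_customers):
    """Single long run of an M/M/1 queue driven by an external uniform
    stream; returns the observed mean time a customer spends in the system."""
    t_arrive = t_depart = total = 0.0
    for _ in range(n_customers):
        # inverse-transform sampling of exponential variates
        t_arrive += -math.log(1.0 - next(uniforms)) / arrival_rate
        service = -math.log(1.0 - next(uniforms)) / service_rate
        t_depart = max(t_depart, t_arrive) + service
        total += t_depart - t_arrive
    return total / n_customers

gen = randu(seed=12345)   # hypothetical seed, not a thesis-selected one
est = mm1_mean_time_in_system(gen, arrival_rate=0.8, service_rate=1.0,
                              n_customers=200_000)
print(f"estimated mean time in system: {est:.3f}  (analytic: {1/(1.0-0.8):.3f})")
```

With a long enough run the estimate approaches the analytic M/M/1 mean time in system, 1/(mu - lambda), illustrating the single-long-run finding quoted above; rerunning with different seeds shows the seed-sensitivity the thesis investigates.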
