About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

The use of computers among secondary school educators in the Western Cape Central Metropole

Naicker, Visvanathan January 2010 (has links)
The use of computers in the classroom could allow both educators and learners to achieve new capabilities. There are underlying factors, however, that are obstructing the adoption rate of computer use for instructional purposes in schools. The study focused on these problems with a view to determining which critical success factors promote a higher adoption rate of computer usage in education. Furthermore, it investigated ways in which computer technology could enhance learning. This study derived its theoretical framework from various technology adoption and educational models. Methodology: The nature of the study required a mixed methods approach to be employed, making use of both quantitative and qualitative data. Two questionnaires, one for the educators and one for the principals of the schools, were hand-delivered to 60 secondary schools. Exploratory factor analysis and various internal consistency measures were used to assess and analyse the data. Conclusion: Educationists and policy-makers must include all principals and educators when technological innovations are introduced into schools. All these role-players need to be cognisant of the implications if innovations are not appropriately implemented. Including the use of computers in educator training programs is important so that pre-service educators can see the benefits of using the computer in their own teaching.
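
As an aside on the internal consistency measures mentioned above, the sketch below shows one common such measure, Cronbach's alpha, computed over hypothetical questionnaire responses; it is an illustration of the general technique, not code from the thesis.

```python
# Minimal sketch of Cronbach's alpha for Likert-style questionnaire data
# (hypothetical data, not from the study).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: (n_respondents, n_items) matrix of questionnaire scores."""
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)       # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 educators answering 4 items on a 1-5 scale.
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
```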
152

Face Recognition: Study and Comparison of PCA and EBGM Algorithms

Katadound, Sachin 01 January 2004 (has links)
Face recognition is a complex and difficult process due to various factors such as variability of illumination, occlusion, face-specific characteristics like hair, glasses, beard, etc., and other problems that commonly affect computer vision. Using a system that offers robust and consistent results for face recognition, various applications such as identification for law enforcement, secure system access, and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis, independent component analysis, and linear discriminant analysis are a few of the statistical techniques commonly used to solve it. Genetic algorithms, elastic bunch graph matching, artificial neural networks, etc., are a few of the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore methods that provide an efficient and feasible solution. Factors affecting the result of face recognition and the preprocessing steps that eliminate such abnormalities are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years. Elastic bunch graph matching (EBGM) is one of the promising techniques studied in this thesis work, and we found better results with the EBGM method than with PCA. We recommend use of a hybrid technique involving the EBGM algorithm to obtain better results. The EBGM method took longer than PCA to train and to generate distance measures for the given gallery images, but we obtained better cumulative match score (CMS) results for EBGM than for PCA. Other promising techniques that can be explored separately in future work include genetic algorithm based methods, mixtures of principal components, and Gabor wavelet techniques.
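
For illustration, the sketch below shows the core of an eigenface-style PCA matcher: gallery faces are projected onto the leading principal components and a probe is identified by nearest neighbour in that subspace. The data layout and function names are assumptions made here, not the thesis' implementation.

```python
# Minimal eigenface-style PCA sketch: train on flattened, aligned gallery
# images, then identify a probe by nearest neighbour in face space.
import numpy as np

def train_pca(gallery: np.ndarray, n_components: int):
    """gallery: (n_images, n_pixels) matrix of flattened face images."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD of the centered data; rows of vt are the principal directions (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    projections = centered @ eigenfaces.T          # gallery coordinates in face space
    return mean_face, eigenfaces, projections

def identify(probe: np.ndarray, mean_face, eigenfaces, projections) -> int:
    """Return the index of the closest gallery face for a flattened probe image."""
    coords = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(projections - coords, axis=1)
    return int(np.argmin(distances))
```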
153

Computational Medical Image Analysis : With a Focus on Real-Time fMRI and Non-Parametric Statistics

Eklund, Anders January 2012 (has links)
Functional magnetic resonance imaging (fMRI) is a prime example of multi-disciplinary research. Without the beautiful physics of MRI, there would not be any images to look at in the first place. To obtain images of good quality, it is necessary to fully understand the concepts of the frequency domain. The analysis of fMRI data requires understanding of signal processing, statistics and knowledge about the anatomy and function of the human brain. The resulting brain activity maps are used by physicians, neurologists, psychologists and behaviourists, in order to plan surgery and to increase their understanding of how the brain works. This thesis presents methods for real-time fMRI and non-parametric fMRI analysis. Real-time fMRI places high demands on the signal processing, as all the calculations have to be made in real-time in complex situations. Real-time fMRI can, for example, be used for interactive brain mapping. Another possibility is to change the stimulus that is given to the subject, in real-time, such that the brain and the computer can work together to solve a given task, yielding a brain computer interface (BCI). Non-parametric fMRI analysis, for example, concerns the problem of calculating significance thresholds and p-values for test statistics without a parametric null distribution. Two BCIs are presented in this thesis. In the first BCI, the subject was able to balance a virtual inverted pendulum by thinking of activating the left or right hand or resting. In the second BCI, the subject in the MR scanner was able to communicate with a person outside the MR scanner, through a virtual keyboard. A graphics processing unit (GPU) implementation of a random permutation test for single subject fMRI analysis is also presented. The random permutation test is used to calculate significance thresholds and p-values for fMRI analysis by canonical correlation analysis (CCA), and to investigate the correctness of standard parametric approaches. The random permutation test was verified by using 10 000 noise datasets and 1484 resting state fMRI datasets. The random permutation test is also used for a non-local CCA approach to fMRI analysis.
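
As a minimal illustration of the random permutation test idea, the sketch below builds an empirical null distribution for a single voxel by permuting its time series; the variable names are assumed, and it ignores the temporal autocorrelation of real fMRI data, which a proper fMRI permutation test must account for.

```python
# Minimal single-voxel permutation test: permute the time series to build a
# null distribution of the test statistic, then derive a p-value and a
# 5% significance threshold. Not the GPU implementation from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(voxel_ts, paradigm, n_perm=10_000):
    observed = abs(np.corrcoef(voxel_ts, paradigm)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(voxel_ts)          # break any real stimulus coupling
        null[i] = abs(np.corrcoef(shuffled, paradigm)[0, 1])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    threshold = np.quantile(null, 0.95)               # empirical 5% threshold
    return p_value, threshold
```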
154

Real Time Driver Safety System

Cho, Gyuchoon 01 May 2009 (has links)
The technology for driver safety has been developed in many fields, such as the airbag system, the Anti-lock Braking System (ABS), ultrasonic warning systems, and others. Recently, some automobile companies have introduced a new kind of driver safety system that slows the car if it detects a driver's drowsy eyes. For instance, Toyota Motor Corporation announced that it has given its pre-crash safety system the ability to determine whether a driver's eyes are properly open with an eye monitor. This paper focuses on finding a driver's drowsy eyes by using face detection technology. The human face is a dynamic object and has a high degree of variability; that is why face detection is considered a difficult problem in computer vision. Even with the difficulty of this problem, scientists and computer programmers have developed and improved face detection technologies. This paper also introduces some algorithms for finding faces or eyes and compares the algorithms' characteristics. Once a face is found in a sequence of images, the task is to find drowsy eyes in the driver safety system. This system can slow a car or alert the driver not to sleep; that is the purpose of the pre-crash safety system. This paper introduces the VeriLook SDK, which is used for finding a driver's face in the real-time driver safety system. With several experiments, this paper also introduces a new way to find drowsy eyes using an AOI (Area of Interest). This algorithm improves the speed of finding drowsy eyes and reduces memory consumption without using any object classification methods or matching eye templates. Moreover, this system achieves higher classification accuracy than others.
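
The sketch below is one assumed reading of the AOI idea, not the thesis' VeriLook-based implementation: crop a fixed eye-region AOI from a detected face box and flag drowsiness when the eyes appear closed over several consecutive frames. The region proportions and thresholds are illustrative guesses.

```python
# Hypothetical AOI-based drowsiness check on grayscale frames.
import numpy as np

def eye_aoi(gray_frame: np.ndarray, face_box):
    """face_box = (x, y, w, h); the eye band is assumed to lie in the upper half of the face."""
    x, y, w, h = face_box
    return gray_frame[y + h // 5 : y + h // 2, x : x + w]

def eyes_look_closed(aoi: np.ndarray, dark_ratio_threshold=0.04) -> bool:
    # Open eyes expose dark pupil/iris pixels; a very low dark-pixel ratio suggests closed lids.
    dark_ratio = np.mean(aoi < 60)
    return dark_ratio < dark_ratio_threshold

def drowsy(closed_flags, window=15) -> bool:
    """Drowsy if the eyes were flagged closed in every one of the last `window` frames."""
    recent = closed_flags[-window:]
    return len(recent) == window and all(recent)
```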
155

Principal design criteria influencing the performance of a portable, high performance parallel I/O implementation

Rajaram, Kumaran. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
156

The potential benefits of multi-modal social interaction on the web for senior users

Singh, Anjeli. Gilbert, Juan E. January 2009 (has links)
Thesis--Auburn University, 2009. / Abstract. Includes bibliographic references (p.22-23).
157

Adapting Remote Direct Memory Access based file system to parallel Input-/Output

Velusamy, Vijay. January 2003 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
158

Transparent process migration for parallel Java computing

Ma, Ka-kui. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 63-65).
159

HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

Zhang, Qing 01 January 2015 (has links)
Geometric reconstruction of dynamic objects is a fundamental task of computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D full human body and capture of its shape dynamics over time using a flexible setup, to guiding clothes/person re-targeting with such data-driven models. Because the mechanical movement of the human body can be considered an articulated motion, which readily drives skin animation but makes the reverse problem of recovering parameters from images without manual intervention difficult, we present a novel parametric model, GMM-BlendSCAPE, which jointly takes the linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show the increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans using a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models across large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model that utilizes both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
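
For context, the sketch below shows plain linear blend skinning, the skinning component that GMM-BlendSCAPE builds on; the array shapes and function name are assumptions made for illustration, not the dissertation's code.

```python
# Minimal linear blend skinning (LBS): each posed vertex is a weighted sum of
# per-bone rigid transforms applied to its rest-pose position.
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """
    rest_vertices:   (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) homogeneous rigid transform of each bone
    weights:         (V, B) skinning weights, each row summing to 1
    returns          (V, 3) posed vertex positions
    """
    V = rest_vertices.shape[0]
    homogeneous = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    # Transform every vertex by every bone, then blend with the skinning weights.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homogeneous)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)               # (V, 4)
    return blended[:, :3]
```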
160

Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

Fu, Bo 01 January 2015 (has links)
Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. The research I focus on lies in two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of teleoperation using mixed reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which utilizes a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view of the reference display. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras and robotic arm. Given the purpose of the system, the calibration accuracy must be kept within the millimeter level. The follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-scanning-based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the desynchronization between the local station and the remote station caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. Thus we develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results have demonstrated that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
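
The sketch below shows one plausible reading of the 1-step-ahead prediction with a smoothing coefficient in [0, 1] described above; the blending rule and parameter value are assumptions, not the thesis' exact controller.

```python
# Hypothetical one-step-ahead prediction: blend the newest operator command
# with the previous prediction using a smoothing coefficient alpha in [0, 1].
def predict_next(prev_prediction: float, latest_command: float, alpha: float = 0.6) -> float:
    return alpha * latest_command + (1.0 - alpha) * prev_prediction

# Example: smooth a stream of delayed joint-angle commands before sending
# them to the remote arm (values are made up).
commands = [0.00, 0.10, 0.25, 0.30, 0.50]
prediction = commands[0]
for cmd in commands[1:]:
    prediction = predict_next(prediction, cmd)
    print(f"command {cmd:.2f} -> predicted next {prediction:.3f}")
```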
