  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Shark Sim: A Procedural Method of Animating Leopard Sharks Based on Raw Location Data

Blizard, Katherine S 01 June 2013 (has links)
Fish such as the leopard shark (Triakis semifasciata) can be tagged on the fin, released back into the wild, and tracked through technologies such as autonomous robots, which store timestamped location data about their target. We present a way to procedurally generate an animated simulation of T. semifasciata using only these timestamped location points. The simulation has several components. Input timestamps dictate a monotonic time-space curve mapping the simulation clock to a space curve: a spline connecting all the location points without sharp folds that would be implausible for a shark to traverse. We create a model leopard shark with convincing kinematics that respond to the space curve. This is achieved by acquiring a skinned model and applying T. semifasciata motion kinematics that respond to velocity and turn commands; these kinematics affect the spine and all fins that control locomotion and direction. Kinematic-based procedural keyframes are added to a queue and interpolated as the shark model traverses the path. The simulation tool generates animation sequences that can be viewed in real time. A user study of 27 individuals measured the perceived realism of the sequences by having users contrast five different film sequences. Results show that, on average, viewers perceive our simulation as more realistic than not.
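The monotonic time-to-space mapping described in the abstract can be sketched in a simplified form. The thesis fits a smooth spline through the location fixes; this sketch substitutes plain linear interpolation between consecutive timestamped points, so it is a minimal stand-in, not the thesis's method, and all names are illustrative.

```python
from bisect import bisect_right

def position_at(t, samples):
    """Map a simulation clock time to a point on the tracked path.

    `samples` is a list of (timestamp, (x, y)) tuples sorted by
    timestamp. The mapping is monotonic in t: later clock times
    never move backwards along the path. The thesis uses a spline;
    here each segment is interpolated linearly for brevity.
    """
    times = [ts for ts, _ in samples]
    if t <= times[0]:
        return samples[0][1]
    if t >= times[-1]:
        return samples[-1][1]
    i = bisect_right(times, t)          # first sample strictly after t
    (t0, (x0, y0)), (t1, (x1, y1)) = samples[i - 1], samples[i]
    u = (t - t0) / (t1 - t0)            # normalized position within segment
    return (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
```

A spline (e.g. Catmull-Rom through the same points) would replace only the last two lines; the clock-to-curve bookkeeping stays the same.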
32

Indigenous language usage in a digital library: He hautoa kia ora tonu ai.

Keegan, Te Taka Adrian Gregory January 2007 (has links)
The research described in this thesis examines indigenous language usage in a digital library environment accessed via the Internet. By examining discretionary use of the Māori Niupepa and Hawaiian Nūpepa digital libraries, this research investigates how indigenous languages were used in these electronic environments in 2005. The results provide encouragement and optimism to people who are striving to retain, revitalise and develop the use of indigenous languages in information technologies. The Transaction Log Analysis (TLA) methods used in this research serve as an example of how web logs can provide significant information about language usage in a bilingual online information system. Combining TLA with user feedback has provided insights into how and why clients use indigenous languages in their information retrieval activities. These insights, in turn, show good practice that is relevant not only to those working with indigenous languages, indigenous peoples or multilingual environments, but to all information technology designers who strive for universal usability. This thesis begins by describing the importance of using indigenous languages in electronic environments and suggests that digital libraries can provide an environment to support and encourage the use of such languages. TLA is explained in the context of this study and is then used to analyse aspects of te reo Māori usage in the Niupepa digital library environment in 2005. TLA also indicates that te reo Māori was used by international clients and that this usage differed from te reo Māori usage by national (Aotearoa) clients. Findings further reveal that the default language setting of the Niupepa digital library had a considerable impact on te reo Māori usage: when the default language was set to te reo Māori, not only were there more requests in te reo Māori but there was also a higher usage of te reo Māori in the information retrieval activities.
TLA of the Hawaiian Nūpepa digital library indicated that the Hawaiian language was also used in that environment. These results confirm that indigenous languages were used in digital library environments. Feedback from clients suggests reasons why indigenous languages were used, including the indigenous language content of the digital library, the indigenous-language default setting of the digital library and a stated desire by the clients to use the indigenous language. The key findings raise some interface design issues and support the claim that digital libraries can provide an environment to support the use of indigenous languages.
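The core counting step of Transaction Log Analysis can be made concrete with a small sketch. It assumes a toy log-line format in which each request records its interface language as an `l=<lang>` field (a Greenstone-style convention; the exact field name and format here are illustrative, not the thesis's actual logs).

```python
from collections import Counter

def language_usage(log_lines):
    """Tally requests by interface language from simple web-log lines.

    Each line is assumed to contain a field like "l=mi" or "l=en"
    recording the interface language of the request. Returns a
    Counter mapping language code to request count, the kind of
    aggregate TLA reports on.
    """
    counts = Counter()
    for line in log_lines:
        for field in line.split():
            if field.startswith("l="):
                counts[field[2:]] += 1
    return counts
```

Real TLA would additionally segment by client origin and session, but the per-language aggregation is the same shape.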
33

Face Recognition: Study and Comparison of PCA and EBGM Algorithms

Katadound, Sachin 01 January 2004 (has links)
Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, and face-specific characteristics like hair, glasses, and beards, along with other problems common to computer vision. With a system that offers robust and consistent face recognition, applications such as identification for law enforcement, secure system access, and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis (PCA), independent component analysis, and linear discriminant analysis are among the statistical techniques commonly applied; genetic algorithms, elastic bunch graph matching (EBGM), and artificial neural networks are among the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore those that provide an efficient and feasible solution. Factors affecting face-recognition results and the preprocessing steps that mitigate them are also discussed briefly. PCA has been the most efficient and reliable method known for at least the past eight years. EBGM is one of the promising techniques studied in this thesis, and in our experiments it produced better results than PCA. Although EBGM took longer than PCA to train and to generate distance measures for the gallery images, it achieved better cumulative match score (CMS) results. We therefore recommend a hybrid technique involving the EBGM algorithm to obtain better results.
Other promising techniques that could be explored in future work include genetic-algorithm-based methods, mixtures of principal components, and Gabor wavelet techniques.
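The PCA baseline the thesis compares EBGM against is the standard eigenface recipe, which can be sketched briefly. This is the textbook construction, not the thesis's exact implementation, and the data in the usage example below are tiny synthetic vectors standing in for flattened face images.

```python
import numpy as np

def pca_recognize(train, labels, probe, k=2):
    """Nearest-neighbour face matching in a PCA (eigenface) subspace.

    `train` is an (n_images, n_pixels) array of flattened gallery
    faces, `labels` the identity of each row, `probe` one flattened
    query face. Subtract the mean face, project everything onto the
    top-k principal components, and return the label of the closest
    gallery image in that subspace.
    """
    mean = train.mean(axis=0)
    centered = train - mean
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                        # top-k principal components
    gallery = centered @ basis.T          # gallery coordinates in subspace
    coords = (probe - mean) @ basis.T     # probe coordinates in subspace
    dists = np.linalg.norm(gallery - coords, axis=1)
    return labels[int(np.argmin(dists))]
```

EBGM instead matches Gabor-jet responses at facial landmarks on an elastic graph, which is why it is costlier to train but more discriminative, as the abstract reports.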
34

Real Time Driver Safety System

Cho, Gyuchoon 01 May 2009 (has links)
Technology for driver safety has been developed in many areas, such as airbag systems, anti-lock braking systems (ABS), and ultrasonic warning systems. Recently, some automobile companies have introduced a new kind of driver safety system that slows the car if it detects the driver's drowsy eyes. For instance, Toyota Motor Corporation announced that it has given its pre-crash safety system the ability to determine whether a driver's eyes are properly open using an eye monitor. This paper focuses on detecting a driver's drowsy eyes using face detection technology. The human face is a dynamic object with a high degree of variability, which is why face detection is considered a difficult problem in computer vision. Despite this difficulty, scientists and programmers have developed and improved face detection technologies. This paper also introduces some algorithms for finding faces or eyes and compares the algorithms' characteristics. Once a face is found in a sequence of images, the remaining task is to find drowsy eyes; the system can then slow the car or alert the driver not to sleep, which is the purpose of a pre-crash safety system. This paper introduces the VeriLook SDK, which is used to find the driver's face in the real-time driver safety system. Through several experiments, this paper also introduces a new way to find drowsy eyes using an Area of Interest (AOI). The algorithm improves the speed of finding drowsy eyes and reduces memory consumption without using object classification methods or matching eye templates. Moreover, the system classifies eye state more accurately than others.
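The spirit of the AOI approach, deciding eye state from pixels in a small cropped region rather than by template matching or classifiers, can be sketched with one crude cue. The thresholds and the specific cue (dark pupil/iris pixels visible when the eye is open) are illustrative assumptions, not the thesis's actual algorithm or values.

```python
def eye_open(aoi, dark_threshold=60, min_dark_fraction=0.05):
    """Heuristic eye-state check inside an Area of Interest (AOI).

    `aoi` is a 2D list of grayscale pixel values cropped around one
    eye. An open eye typically shows a cluster of dark pupil/iris
    pixels; a closed lid is more uniform. Returns True if enough
    dark pixels are present to call the eye open.
    """
    pixels = [p for row in aoi for p in row]
    dark = sum(1 for p in pixels if p < dark_threshold)
    return dark / len(pixels) >= min_dark_fraction
```

Because this touches only the small AOI each frame, it is cheap in both time and memory, which is the property the abstract emphasizes.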
35

HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

Zhang, Qing 01 January 2015 (has links)
Geometric reconstruction of dynamic objects is a fundamental task of computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D human body and capturing shape dynamics over time using a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which readily drives skin animation but is difficult to invert, that is, to recover motion parameters from images without manual intervention. We therefore present a novel parametric model, GMM-BlendSCAPE, which jointly considers a linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople), and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models with large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration.
Last but not least, we demonstrate a novel virtual clothes try-on application, based on our personalized model, that utilizes both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
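The GMM inference step mentioned in the abstract, scoring which mixture component best explains a partial observation, can be sketched in miniature. GMM-BlendSCAPE is far richer than this; the sketch below shows only the core scoring computation, with made-up diagonal covariances and NaN marking missing dimensions so incomplete observations can still be evaluated.

```python
import numpy as np

def most_likely_component(x, means, variances, weights):
    """Pick the GMM component that best explains an observation.

    `x` may contain NaN for unobserved dimensions (e.g. occluded
    parts of a partial scan); those dimensions are simply skipped
    when scoring, so incomplete data can still be assigned to a
    component. Covariances are assumed diagonal for simplicity.
    """
    x = np.asarray(x, dtype=float)
    seen = ~np.isnan(x)                         # observed dimensions only
    scores = []
    for mu, var, w in zip(means, variances, weights):
        mu = np.asarray(mu, dtype=float)[seen]
        var = np.asarray(var, dtype=float)[seen]
        diff = x[seen] - mu
        # Diagonal-Gaussian log-density over the observed dimensions.
        log_pdf = -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var))
        scores.append(np.log(w) + log_pdf)
    return int(np.argmax(scores))
```

In the dissertation's setting the "dimensions" would be shape/pose parameters rather than raw coordinates, but the marginalize-over-missing-data idea is the same.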
36

Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

Fu, Bo 01 January 2015 (has links)
Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It directly aligns the human subject's frame of reference with that of the displayed image. The advantage of this approach is that users need no wearable device, making the system minimally intrusive and accommodating the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from a limited field of view. The technical contribution of the proposed HRD system is its multi-system calibration, which involves a motion sensor, projector, cameras, and a robotic arm; given the system's purpose, calibration accuracy must be within the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica using commodity devices, for better alignment of the video frames, because conventional 3D scanners either lack depth resolution or are very expensive.
We propose a structured-light 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated by optimizing a cost function. We then explore telepresence. Many hardware designs allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
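The "linear equations with a smoothing coefficient in [0, 1]" formulation of latency compensation can be sketched in its simplest instance: exponential smoothing, where the one-step-ahead prediction blends the newest observation with the running estimate. This is a minimal stand-in for the thesis's predictive controller, and the coefficient value is illustrative, not a tuned one.

```python
def predict_next(history, alpha=0.5):
    """One-step-ahead prediction of an operator command stream.

    `history` is the sequence of commands observed so far and
    `alpha` the smoothing coefficient in [0, 1]: alpha near 1
    trusts the newest observation, alpha near 0 trusts the running
    estimate. Returns the predicted next command, which the remote
    robot can act on before the true command arrives.
    """
    estimate = history[0]
    for x in history[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate
```

The thesis goes further by choosing the coefficient via a cost-function optimization; the recurrence above is the linear core being optimized.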
37

Visualizing and Predicting the Effects of Rheumatoid Arthritis on Hands

Mihail, Radu P 01 January 2014 (has links)
This dissertation was inspired by the difficult decisions patients with chronic diseases must make about treatment options in light of uncertainty. We look at rheumatoid arthritis (RA), a chronic autoimmune disease that primarily affects the synovial joints of the hands and causes pain and deformities. In this work, we focus on several parts of a computer-based decision tool that patients can interact with using gestures, asking questions about the disease and visualizing possible futures. We propose a hand-gesture-based interaction method that is easily set up in a doctor's office and can be trained with a custom set of gestures chosen to be least painful. Our system is versatile and can be used for operations ranging from simple selections to navigating a 3D world. We propose a point distribution model (PDM) capable of modeling the hand deformities that occur due to RA, and a generalized fitting method for use on radiographs of hands. Using our shape model, we show novel visualizations of disease progression. Using expertly staged radiographs, we propose a novel distance metric learning and embedding technique that can automatically stage an unlabeled radiograph. Given a large set of expertly labeled radiographs, our data-driven approach can extract the different modes of deformation specific to a disease.
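The point distribution model the abstract describes follows a standard recipe: PCA over aligned landmark shapes yields a mean shape plus deformation modes, and new plausible shapes come from weighting those modes. The sketch below is that standard construction, not the thesis's generalized fitting method, and the landmark data are synthetic.

```python
import numpy as np

def pdm_modes(shapes, n_modes=1):
    """Extract deformation modes from aligned landmark shapes.

    `shapes` is an (n_shapes, 2 * n_landmarks) array of flattened
    (x, y) landmark coordinates, assumed already aligned. Returns
    (mean_shape, modes), where `modes` holds the top principal
    directions of shape variation.
    """
    mean = shapes.mean(axis=0)
    # Right singular vectors of the centered data are the modes.
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def synthesize(mean, modes, weights):
    """Generate a shape as the mean plus a weighted sum of modes.

    Sweeping a weight through a range is how a PDM visualizes
    progressive deformation, e.g. stages of RA deformity.
    """
    return mean + np.asarray(weights) @ modes
```

Staging an unlabeled radiograph, as the thesis does, would then amount to embedding its fitted mode weights and comparing against expertly staged examples under a learned metric.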
38

Assessment and support of the idea co-construction process that influences collaboration

Gweon, Gahgene 01 April 2012 (has links)
Research in team science suggests strategies for addressing the difficulties that groups face when working together. This dissertation examines how student teams work in project-based learning (PBL) environments, with the goal of creating strategies and technology to improve collaboration. The challenge of working in such a group is that the members frequently come from different backgrounds and thus have different ideas about how to accomplish a project. In these groups, teamwork and the production of successful solutions depend on whether members consider each other's dissimilar perspectives. However, the lack of a shared history means that members may have difficulty taking the time to share and build knowledge collectively. The ultimate goal of my research is to design strategies and technology to improve the inner workings of PBL groups so that they learn from each other and produce successful outcomes in collaborative settings. The field of computer-supported collaborative learning has made much progress on designing, implementing, and evaluating environments that support project-based learning. However, most existing research concerns students rather than instructors. Therefore, in my initial research I explore the needs of instructors in conducting student assessments (studies one and two). These studies identify five different group processes that are of importance from the instructors' perspective. My subsequent research focuses on one of them, the process of knowledge co-construction, which instructors have significant difficulty assessing.
In order to support the assessment of the knowledge co-construction process, my research has progressed along two axes: (a) identifying conditions that support the knowledge co-construction process and its relationship to learning and knowledge transfer (studies three, four, and five), and (b) automatically monitoring the knowledge co-construction process using natural language processing and machine learning (studies six through nine). Studies five and eight look at a specific type of knowledge co-construction called the idea co-construction process (ICC). ICC is the process of taking up, transforming, or otherwise building on an idea expressed earlier in a conversation. I argue that ICC is essential for groups to function well in terms of knowledge sharing and perspective taking.
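The notion of "building on an idea expressed earlier" can be made concrete with a deliberately crude proxy. The dissertation uses trained NLP and machine learning models; the sketch below substitutes simple content-word overlap with earlier turns, purely to illustrate what an automatic ICC detector is looking for, with an illustrative stopword list and threshold.

```python
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in",
             "on", "is", "it", "we", "i", "you"}

def builds_on(turn, earlier_turns, min_overlap=2):
    """Toy lexical-uptake detector for idea co-construction (ICC).

    A turn is flagged as building on the conversation if it shares
    at least `min_overlap` content words with some earlier turn, a
    crude stand-in for the transformation and uptake the real
    models detect.
    """
    def content(text):
        return {w.lower().strip(".,!?") for w in text.split()} - STOPWORDS

    words = content(turn)
    return any(len(words & content(prev)) >= min_overlap
               for prev in earlier_turns)
```

A real detector would also need to distinguish genuine transformation of an idea from mere repetition, which lexical overlap alone cannot do.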
39

Improving Understanding and Trust with Intelligibility in Context-Aware Applications

Lim, Brian Y. 01 May 2012 (has links)
To facilitate everyday activities, context-aware applications use sensors to detect what is happening and use increasingly complex mechanisms (e.g., big rule-sets or machine learning) to infer the user's context and intent. For example, a mobile application can recognize that the user is in a conversation and suppress any incoming calls. When the application works well, this implicit sensing and complex inference remain invisible. However, when it behaves inappropriately or unexpectedly, users may not understand its behavior. This can lead users to mistrust, misuse, or even abandon it. To counter this lack of understanding and loss of trust, context-aware applications should be intelligible, capable of explaining their behavior. We investigate providing intelligibility in context-aware applications and evaluate its usefulness for improving user understanding and trust. Specifically, this thesis supports intelligibility in context-aware applications through the provision of explanations that answer different question types, such as: Why did it do X? Why did it not do Y? What will it do if I do W? How can I get the application to do Y? This thesis takes a three-pronged approach to investigating intelligibility by (i) eliciting the user requirements for intelligibility, to identify what explanation types end-users are interested in asking context-aware applications, (ii) supporting the development of intelligible context-aware applications with a software toolkit and the design of these applications with design and usability recommendations, and (iii) evaluating the impact of intelligibility on user understanding and trust under various situations and levels of application reliability, and measuring how users use an interactive intelligible prototype. We show that users are willing to use well-designed intelligibility features, and that this can improve user understanding and trust in the adaptive behavior of context-aware applications.
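The question types the abstract lists (Why, Why not, What if) can be illustrated with a toy dispatcher over a rule-based application. All names here are illustrative and this is not the thesis's toolkit API; it only shows how each question type maps to a different query over the same rule-set.

```python
def explain(rules, context, question, action=None, hypothetical=None):
    """Answer intelligibility questions for a toy rule-based app.

    `rules` maps an action name to a predicate over the context
    dict, a stand-in for the rule-sets or learned models the thesis
    targets. "why" lists the actions that fired; "why not" reports
    whether the named action's condition failed; "what if"
    re-evaluates the action under a hypothetical context.
    """
    if question == "why":
        return [a for a, cond in rules.items() if cond(context)]
    if question == "why not":
        return not rules[action](context)
    if question == "what if":
        return rules[action]({**context, **hypothetical})
    raise ValueError(f"unsupported question type: {question}")
```

For instance, with a rule suppressing calls during conversations, "why" returns the fired action, and "what if" shows the behavior under a changed context, which is the kind of interactive probing the thesis evaluates.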
40

Improving Immersive Reality Workflows and the Harvey Mudd Clinic Process

Mitchell, Holly 01 January 2018 (has links)
This paper summarizes an experience with the Harvey Mudd Clinic developing a plugin for Unity that allows users to more easily reduce the polygon count, and thereby the load time, of a model in an AR/VR experience. The project focused on UI design and flexible code architecture.
