  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Full-body Shell Creation for CAD Virtual Humans including Tightly-Spaced, Enclosed Shells

Htet, Aung Thu 18 December 2017 (has links)
Computational human models have become essential in several biomedical and electrical engineering research areas. They enable scientists to study, model, and solve complex problems of human body responses to various external stimuli, including electromagnetic and radio-frequency signals. This study describes the algorithms and procedures for creating multi-tissue full-body Computer-Aided Design (CAD) human models. Emphasis is placed on full-body shells of variable thickness, e.g., skin, fat, and average-body container shells. Such shells, along with internal organs, are useful for multiple high- and low-frequency simulations in a variety of applications. Along with the creation of the full-body models, an automatic algorithm is developed that selectively decimates the meshes based on average surface curvature. The algorithm significantly reduces model size while maintaining interpolation accuracy.
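The curvature-driven selection step described in the abstract can be illustrated in miniature. The sketch below is an assumed criterion only, not the thesis's algorithm: it uses the discrete angle deficit as a per-vertex curvature estimate and a hypothetical flatness threshold to flag vertices that are safe to decimate first.

```python
import math

def angle_at(v, a, b):
    # interior angle at vertex v in triangle (v, a, b)
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def dot(p, q): return sum(pi * qi for pi, qi in zip(p, q))
    u, w = sub(a, v), sub(b, v)
    c = dot(u, w) / (math.sqrt(dot(u, u)) * math.sqrt(dot(w, w)))
    return math.acos(max(-1.0, min(1.0, c)))

def angle_deficits(vertices, faces):
    # discrete Gaussian curvature estimate: 2*pi minus the sum of the
    # triangle angles incident on each vertex (zero on a flat interior)
    deficits = [2.0 * math.pi] * len(vertices)
    for i, j, k in faces:
        deficits[i] -= angle_at(vertices[i], vertices[j], vertices[k])
        deficits[j] -= angle_at(vertices[j], vertices[i], vertices[k])
        deficits[k] -= angle_at(vertices[k], vertices[i], vertices[j])
    return deficits

def decimation_candidates(vertices, faces, threshold=1e-3):
    # flag vertices whose angle deficit is near zero (locally flat);
    # collapsing these first removes triangles where little detail is lost
    d = angle_deficits(vertices, faces)
    return [idx for idx, dev in enumerate(d) if abs(dev) < threshold]
```

For a flat fan of triangles the central vertex has zero deficit and is flagged as a candidate; lifting that vertex out of the plane gives it a positive deficit and removes it from the candidate list.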
2

An investigation on the framework of dressing virtual humans

Yu, Hui January 2010 (has links)
Realistic human models are widely used in a variety of applications. Much research has been carried out on improving the realism of virtual humans in various aspects, such as body shape, hair, and facial expressions. On most occasions these virtual humans need to wear garments; however, dressing a human model with current software packages is time-consuming and tedious [Maya2004]. Several methods for dressing virtual humans have been proposed recently [Bourguignon2001, Turquin2004, Turquin2007, Wang2003B]. The method of Bourguignon et al. [Bourguignon2001] can only generate a 3D garment contour rather than a 3D surface. The method of Turquin et al. [Turquin2004, Turquin2007] can generate various kinds of garments from sketches, but the garments follow the shape of the body, and the sides of a garment look unconvincing because simple linear interpolation is used. The method of Wang et al. [Wang2003B] lacks user interactivity, so users have very limited control over the garment shape.

This thesis proposes a framework for dressing virtual humans that produces convincing results and overcomes the problems in the work mentioned above by using nonlinear interpolation, level-set-based shape modification, and feature constraints. The human models used in this thesis are reconstructed from real human body data obtained with a body scanning system. Semantic information is then extracted from the human models to assist in the generation of three-dimensional (3D) garments. The proposed framework allows users to dress virtual humans using garment patterns and sketches. The dressing method is based on semantic virtual humans: a semantic human model is a human body with semantic information represented by a certain structure and body features. The semantic human body is reconstructed from scanned data of a real human body. After segmenting the human model into six parts, some key features are extracted. These key features are used as constraints for garment construction.

Simple 3D garment patterns are generated using sweep and offset techniques. To dress a virtual human, the user simply chooses a garment pattern, which is automatically placed on the human body at a default position and size. Users can change simple parameters to specify garment sizes by sketching the desired position on the human body. To enable users to dress virtual humans in their own design styles in an intuitive way, this thesis proposes an approach for garment generation from user-drawn sketches. Users can draw sketches directly around reconstructed human bodies, and the system generates 3D garments based on the drawn strokes. The specific focus of the research lies in the generation of 3D geometric garments, garment shape modification, local shape modification, garment surface processing, and decoration creation. A sketch-based interface has been developed that allows users to draw a garment contour representing the front-view shape of a garment, from which the system generates a 3D geometric garment surface. To improve the realism of a garment surface, this thesis presents three methods. First, the garment vertex generation procedure takes key body features as constraints. Second, an optimisation algorithm is run after vertex generation to optimise the positions of the garment vertices. Finally, mesh processing schemes are applied to further refine the garment surface. Through this series of processing steps, an elaborate 3D geometric garment surface is obtained.

Finally, this thesis proposes modification and editing methods. User-drawn sketches are processed into spline curves, which allow users to modify an existing garment shape by dragging control points to the desired positions; this makes it easy to obtain a more satisfactory garment shape than the existing one. Three decoration tools, a 3D pen, a brush, and an embroidery tool, are provided so that users can decorate the garment surface with small 3D details such as brand names and symbols. A prototype of the framework has been developed using Microsoft Visual Studio C++, OpenGL, and GPU programming.
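The "offset" half of the sweep-and-offset pattern generation can be illustrated with a toy 2D version: pushing a body cross-section outward by a fixed garment ease. The radial-from-centroid scheme and the `ease` parameter here are assumptions for illustration, not the thesis's method.

```python
import math

def offset_section(points, ease):
    # offset a closed body cross-section outward from its centroid by a
    # fixed "ease" distance, so the garment section clears the body section
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy) or 1.0  # guard a point sitting on the centroid
        out.append((x + ease * dx / r, y + ease * dy / r))
    return out
```

Sweeping a sequence of such offset sections along the body's vertical axis and stitching neighbouring sections into triangles would then yield a simple tubular garment surface.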
3

Development of the VHP-Female CAD model including Dynamic Breathing Sequence

Tran, Anh Le 26 April 2017 (has links)
Computational modeling combines mathematics, physics, biology, and computer science to study the behavior and responses of complex biomedical systems. Modern biomedical research relies significantly on realistic computational human models, or “virtual humans”. Study areas that use computational human models include electromagnetics, solid mechanics, fluid dynamics, optics, ultrasound propagation, thermal propagation, and automotive safety research. These and other applications provide ample justification for the realization of the Visible Human Project® (VHP)-Female v. 4.0, a new platform-independent full-body electromagnetic computational model. Along with the VHP-Female v. 4.0, a realistic and anatomically justified Dynamic Breathing Sequence is developed. Such a model is essential to the development of biomedical devices and procedures that are affected by the dynamics of human breathing, such as Magnetic Resonance Imaging and the calculation of the Specific Absorption Rate. The model can be used in numerous applications, including breath-detection radar for human search and rescue.
4

The Effect of Story Narrative in Multimedia Learning

January 2018 (has links)
ELearning, or distance learning, has been a fast-developing topic in education. In 1999, Mayer put forward the Cognitive Theory of Multimedia Learning (Moreno & Mayer, 1999). The theory consists of several principles; one of them, the Modality Principle, states that learners presented with spoken words perform better than those presented with on-screen text (Mayer, Dow, & Mayer, 2003; Moreno & Mayer, 1999). The implication is that learner performance can be affected by the modality of the learning materials. A very common tool in literature and language education is narrative. This way of storytelling has been successful in practical use. The advantages of using narrative include (a) inherent format advantages such as simple structure and familiar language and ideas, (b) motivating learners, (c) facilitating listening, (d) developing oral ability, and (e) providing schemata for comparison in comprehension. Although this storytelling method has been widely used in literature, language, and even moral education, few studies have focused on science and technology. This study aims to test the effect of narrative in a multimedia setting with a science topic. A script-based story was applied. The multimedia setting includes a virtual human with synthetic speech and animation in a lesson on solar cells. The experimental design is a randomized alternative-treatments design in which participants are asked to watch a video with a pedagogical agent either in story format or not. Participants were recruited from Amazon Mechanical Turk. Transfer and retention scores showed no significant difference between the narrative and non-narrative conditions. Directions for future study are discussed. / Dissertation/Thesis / Masters Thesis Engineering 2018
5

Virtual Human Hand: Grasping Strategy and Simulation

Peña Pitarch, Esteban 25 January 2008 (has links)
The human hand is a most complete tool, able to adapt to different surfaces and shapes and to touch and grasp; it is a direct connection between the exterior world and the brain. I. Kant (the German philosopher) described the hand as an extension of the brain. In this dissertation, we built a virtual human hand to simulate the human hand as realistically as possible. Based on the anatomy of the hand, we designed a hand with 25 degrees of freedom (DOF), with four of these degrees located in the carpometacarpal joints of the ring and small fingers. These four degrees permit the simulation of the arching of the human hand. The thumb was designed with 5 DOF; the index and middle fingers have 4 DOF each, two in the metacarpophalangeal joint and one each in the proximal and distal interphalangeal joints. The ring and small fingers have the same 4 DOF in similar joints, plus the four carpometacarpal degrees described above. The Denavit-Hartenberg (D-H) method was applied, with each finger considered a ray, i.e., an open kinematic chain whose joints are approximated as revolute joints. The D-H tables for each finger are presented, and the application of forward and inverse kinematics permits the calculation of all joint angles [q1 . . . q25]T. Before grasping any object, our system checks the reachability of the object with a workspace analysis. Semi-intelligent, task-oriented object grasping was implemented to make a decision once the user chooses the object and the task inherent to it. The grasping algorithm was implemented in a virtual environment.
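The Denavit-Hartenberg formulation named in the abstract can be sketched as follows: the standard D-H homogeneous transform for one revolute joint, chained along a finger ray (an open kinematic chain) to obtain the fingertip position. The link parameters in the usage note are hypothetical, not the thesis's hand dimensions.

```python
import math

def dh_matrix(theta, d, a, alpha):
    # standard Denavit-Hartenberg homogeneous transform for one joint:
    # rotate theta about z, translate d along z, translate a along x,
    # rotate alpha about x
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def fingertip_position(dh_rows):
    # chain the per-joint transforms of one finger ray and read the
    # fingertip position from the last column of the composed matrix
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_rows:
        T = matmul(T, dh_matrix(theta, d, a, alpha))
    return (T[0][3], T[1][3], T[2][3])
```

For a hypothetical planar two-link finger with unit link lengths, `fingertip_position([(0, 0, 1, 0), (0, 0, 1, 0)])` gives a fully extended fingertip at (2, 0, 0); bending the first joint by 90 degrees swings it to (0, 2, 0).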
6

Optimizing a Virtual Human Platform for Depression/Suicide Ideation Identification for the American Soldier

Monahan, Christina M 01 December 2021 (has links) (PDF)
Suicide has surpassed homicide to become the second leading cause of death among people 10-24 years old in the United States [1]. This statistic is alarming, especially when combined with the more than eight distinctly different types of clinical depression recognized in society today [2]. Further complicating this health crisis is the worldwide isolating pandemic known as COVID-19, which has spanned 12 months. It is more important than ever to get ahead of the crisis by identifying symptoms as they set in and, more importantly, before the decision to commit suicide. To capitalize on the modern shift to electronic-based interactions [1], Artificial Intelligence (AI) and Machine Learning (ML) methods have previously been applied to aid identification in virtual human interviewing platforms. This effort examines these existing approaches and includes an independent survey that addresses the gap in early identification of depression and suicidal ideation with a virtual human interviewing platform by soliciting honest, open, and current feedback from Soldiers on how to optimize such a system and encourage its future use. Specifically, analysis of the survey results identifies critical gaps from a participant's perspective, namely security, customization, and error handling, recommended for inclusion in future development of the EMPOWER (Enhancing Mental Performance and Optimizing Warfighter Effectiveness and Resilience: From MultiSense to OmniSense) platform. These recommendations are provided to the USC-ICT EMPOWER team for inclusion in the next prototype and system test.
7

Investigation of an emotional virtual human modelling method

Zhao, Yue January 2008 (has links)
In order to simulate virtual humans more realistically and endow them with life-like behaviours, several exploratory studies on emotion calculation, synthetic perception, and the decision-making process are discussed. A series of sub-modules has been designed, and simulation results are presented and discussed. A vision-based synthetic perception system is proposed in this thesis, which allows virtual humans to sense the surrounding virtual environment through a collision-based synthetic vision system. It enables autonomous virtual humans to change their emotional states in real time according to stimuli. The synthetic perception system also allows virtual humans to remember limited information in their own first-in-first-out short-term virtual memory. The new emotion generation method comprises a novel hierarchical emotion structure and a group of emotion calculation equations, which enable virtual humans to behave emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory together with the emotion calculation equations to generate their own numerical emotional states within the hierarchical emotion structure. These emotional states are important internal references for virtual humans when adopting appropriate behaviours, and key cues for their decision making. The work introduces a dynamic emotional motion database structure for virtual human modelling. To develop realistic virtual human behaviours, many subjects were motion-captured while performing emotional motions with or without intent. The captured motions were applied to virtual characters and implemented in different virtual scenarios to help evoke and verify design ideas and the possible consequences of simulations (such as fire evacuation). This work also introduces simple-heuristics theory into the decision-making process in order to make the virtual human's decision making more like a real human's. Emotion values are proposed as a group of key cues for decision making under the simple-heuristic structures. A data interface that connects the emotion calculation and the decision-making structure has also been designed for the simulation system.
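The bounded FIFO short-term memory and per-stimulus emotion update can be sketched as below. The leaky-integrator update rule is an assumed placeholder for illustration; the thesis derives its own emotion calculation equations from psychological emotion measurements.

```python
from collections import deque

class EmotionalAgent:
    # illustrative sketch: a bounded first-in-first-out short-term memory
    # plus one scalar emotion state; the decay/stimulus rule is a common
    # leaky-integrator form, not the equations derived in the thesis
    def __init__(self, memory_size=5, decay=0.8):
        self.memory = deque(maxlen=memory_size)  # oldest percepts drop first
        self.decay = decay
        self.emotion = 0.0  # e.g. a fear level kept in [0, 1]

    def perceive(self, stimulus_id, intensity):
        self.memory.append(stimulus_id)
        # decay toward neutral, then integrate the new stimulus
        self.emotion = self.decay * self.emotion + (1.0 - self.decay) * intensity
        self.emotion = max(0.0, min(1.0, self.emotion))
```

A behaviour-selection layer would then read `emotion` (and the memory contents) as the internal cues the abstract describes, e.g. switching to a flee behaviour once the value crosses a threshold.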
8

Full Body Interaction : Toward an integration of Individual Differences / Interaction Corps Entier : Vers une intégration des différences interindividuelles

Giraud, Tom 26 March 2015 (has links)
In human-computer interaction, virtual humans are now established as a specific object of research. They build on human-to-human interaction routines to serve various application goals. Although virtual humans (VH) have bodies, current research suffers from two major limitations that impair the experienced credibility: modeled bodily behaviors lack social interactivity and do not account for individual differences. Recent developments in the human sciences call for a more integrative approach with, at its heart, the constitutive role of social interaction. Of particular importance is the central and interdisciplinary position of virtual humans in this new research agenda: they are both a way to better investigate socially interactive phenomena (VH as experimental tools) and potential solutions for societal challenges (VH as applications). The main goal of this PhD thesis is to contribute to both computer science and the human sciences by studying bodily interaction and individual differences together. Central to this study is the long-term objective of developing interactive virtual humans at the interface of these domains, with the idea that requirements from both fields would positively constrain future propositions.

To limit the scope of the thesis, we focused on body movements (not considering static bodily aspects or other modalities), on low-level coupling mechanisms, and on the moderating role of individual differences, with the aim of proposing proof-of-concept virtual human prototypes (rather than complete functional software) embedding full-body dyadic interaction models. Our research methodology can be summarized in four main steps. First, models and hypotheses linking social interaction processes and individual differences emerged from a review of the literature in both computer science and the human sciences. As the identified relevant individual differences appeared barely associated theoretically, our second step investigated their interrelatedness in a large-scale study. Third, bodily interactions were analyzed in two case studies of applicative and experimental interest. In both cases, multimodal corpora of full-body interacting dyads were collected and individual differences measured. The final phase was to develop virtual human prototypes inspired by the previous analyses and based on the collected data. The proposed general model of individual differences was shown to be consistent with real-world data (collected by self-report questionnaires): dispositions in pro-social orientation, empathy, and emotion regulation were closely related. The two case studies partially confirmed our initial hypotheses: several individual differences modulated the bodily interactive processes. These studies enabled the definition of parsimonious interactive virtual human models. The main critical contribution of the two case studies to the proposed model of individual differences is the clear necessity of taking the task context into consideration before drawing any hypotheses. Future directions of research are proposed, including an integration of the individual differences identified in the case studies into interactive computational models.
9

Animação de humanos virtuais aplicada para língua brasileira de sinais / Virtual human animation applied in brazilian sign language

Schneider, Andréia Rodrigues de Assunção January 2008 (has links)
Deaf people have a limited capacity to use oral language to communicate and therefore have gestural languages as their native languages. This makes it difficult for them to make satisfactory use of basic services and to integrate into the hearing society, to which the majority of the population belongs. Because this language is gestural, its signs can be simulated with the animation of virtual humans without losing the correct perception of their meaning (which word each sign represents). This work describes an animation technique applied to LIBRAS (Brazilian Sign Language). The main idea is, starting from a description of a sign's animation, to execute its movement in a more or less ample manner so as to use the space available for gesticulation without losing the meaning of the sign. The computer animation of a sign must be as close to the real gesture as possible: its meaning must be easily understood, and its execution must be natural (smooth and continuous). For that, the signs must be defined in accordance with the movement limitations of the human joints as well as the field of view of the receiver. In addition, some parameters must be analyzed and defined: speed of the movement, and timing and amplitude of the signs. Another important aspect is the space available for the execution of the sign: depending on the space, the sign must be animated so as to fit within it. The implementation of the technique resulted in an animation system for LIBRAS consisting of three modules: • a virtual human modeler, which ensures that the joints and DOFs are anatomically consistent with reality; • a gesture generator, which transforms parameters such as speed, gesture execution time, and joint configuration into a file that describes the animation of a pose (it is worth emphasizing that words in LIBRAS are known as signs; a sign, in turn, is composed of one or more gestures, and gestures are composed of poses); • an animator, which generates the animation of a previously created sign, fitting (if necessary) the amplitude of the sign to the space available for its execution. The system was submitted to tests to validate the technique. The goal of the tests was to verify whether the generated signs were understandable, that is, whether the generated animation represented the intended word. All aspects mentioned above are presented and analyzed in detail.
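The amplitude-adaptation step, fitting a sign into the available signing space without changing what it means, can be sketched as a uniform scaling of the gesture trajectory about its centroid. This is an illustrative simplification, not the thesis's adaptation algorithm, which must also respect joint limits.

```python
def fit_gesture(trajectory, max_extent):
    # uniformly scale a 3D gesture path about its centroid so that its
    # largest axis-aligned extent fits the available signing space,
    # preserving the shape (and hence the meaning) of the movement
    n = len(trajectory)
    cx = sum(p[0] for p in trajectory) / n
    cy = sum(p[1] for p in trajectory) / n
    cz = sum(p[2] for p in trajectory) / n
    extent = max(
        max(p[i] for p in trajectory) - min(p[i] for p in trajectory)
        for i in range(3)
    )
    if extent <= max_extent or extent == 0.0:
        return list(trajectory)  # already fits; leave the sign unchanged
    s = max_extent / extent
    return [(cx + s * (x - cx), cy + s * (y - cy), cz + s * (z - cz))
            for x, y, z in trajectory]
```

A gesture wider than the available space is shrunk in place, while one that already fits passes through untouched, matching the "adequate if necessary" behaviour of the animator module.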
10

A Virtual Human Animation Tool Using Motion Capture Data

Nar, Selim 01 July 2008 (has links) (PDF)
In this study, we developed an animation tool to animate 3D virtual characters. The tool offers facilities to integrate motion capture data with a 3D character mesh and to animate the mesh using the Skeleton Subsurface Deformation and Dual Quaternion Skinning methods. It is a compact tool, so it is easy to distribute, install, and use. The tool can be used to illustrate medical kinematic gait data for educational purposes. For validation, we obtained medical motion capture data from two separate sources and used it to animate a 3D mesh model. The animations were presented to physicians for evaluation. The results show that the tool is sufficient for displaying the salient gait patterns of the patients. The tool provides interactivity for inspecting the movements of a patient from different angles and distances. Because we animate anonymous virtual characters, the anonymity of the patient is preserved.
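Of the two skinning methods named above, the linear blend (skeletal subspace deformation) form is simple enough to sketch; dual quaternion skinning replaces the weighted matrix average below with a blend of unit dual quaternions to avoid the well-known "candy-wrapper" collapse at twisting joints. The bone matrices and weights here are illustrative, not the tool's data format.

```python
def skin_vertex(vertex, bone_transforms, weights):
    # linear blend skinning: the deformed position is the weighted average
    # of the rest-pose vertex transformed by each influencing bone's 4x4
    # homogeneous matrix (weights are assumed to sum to 1)
    x, y, z = vertex
    out = [0.0, 0.0, 0.0]
    for M, w in zip(bone_transforms, weights):
        # apply the affine part of M to (x, y, z, 1)
        p = [M[r][0] * x + M[r][1] * y + M[r][2] * z + M[r][3] for r in range(3)]
        for r in range(3):
            out[r] += w * p[r]
    return tuple(out)
```

For a vertex influenced half-and-half by a stationary bone and a bone translated one unit along x, the deformed position lands halfway between the two transformed positions, which is exactly the averaging behaviour that causes volume loss under large rotations.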
