1 |
Les mythes de Marcel Dugas : jeux de masques et posture ironique [The myths of Marcel Dugas: mask-play and ironic posture]. Boisvert, Véronique, January 2006 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
2 |
Week 14, Video 02: Contact Pose. Marlow, Gregory, 01 January 2020 (has links)
https://dc.etsu.edu/digital-animation-videos-oer/1088/thumbnail.jpg
|
3 |
Week 14, Video 03: Other Contact Poses. Marlow, Gregory, 01 January 2020 (has links)
https://dc.etsu.edu/digital-animation-videos-oer/1089/thumbnail.jpg
|
4 |
Week 14, Video 04: Remaining Poses. Marlow, Gregory, 01 January 2020 (has links)
https://dc.etsu.edu/digital-animation-videos-oer/1090/thumbnail.jpg
|
5 |
Local pose estimation of feature points for object based augmented reality. / Detecção de poses locais de pontos de interesse para realidade aumentada baseadas em objetos. Tokunaga, Daniel Makoto, 27 June 2016 (has links)
Using real objects as links between real and virtual information is a key aspect of augmented reality. A central issue in achieving this link is estimating the visuospatial information of the observed object, in other words, its pose. Different objects can behave differently when used for interaction: this encompasses not only changes in position, but also folding or deformation. Traditional research in the area solves these pose estimation problems with different approaches depending on the type of object. Additionally, some studies rely only on the positional information of observed feature points, simplifying the object information. In this work, we explore the pose estimation of different objects by gathering more information from the observed feature points and obtaining the local poses of such points, which other studies do not explore. We apply this local pose estimation idea in two different capture scenarios, yielding two novel pose estimation approaches: one based on RGB-D cameras, and another based on RGB cameras and machine learning methods. In the RGB-D based approach, we use the feature point's orientation and nearby surface to obtain its normal, and then find the local 6 degrees-of-freedom (DoF) pose. This approach gives us not only the rigid object pose, but also the approximate pose of deformed objects. Our RGB based approach, on the other hand, explores machine learning over local appearance changes. Unlike other RGB based works, we replace complex non-linear system solvers with a fast and robust method, recovering the local rotation of the observed feature points as well as the full 6 DoF rigid object pose with dramatically lower real-time computation demands. Both approaches show that gathering local poses can provide information for the pose estimation of different types of objects.
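The RGB-D idea above, recovering a feature point's normal from the nearby surface and combining it with the point's image orientation to get a full local frame, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the thesis implementation: the neighbourhood points and the lifted keypoint orientation are taken as given inputs, and the normal is estimated by PCA on the neighbourhood.

```python
import numpy as np

def local_pose(points, keypoint_dir):
    """Estimate a local 6-DoF pose (rotation + translation) for a feature
    point from its 3D neighbourhood, as captured by an RGB-D camera.

    points: (N, 3) array of 3D points around the feature point.
    keypoint_dir: 3-vector, the feature point's image orientation lifted
        to 3D (assumed not parallel to the surface normal).
    """
    centroid = points.mean(axis=0)
    # Surface normal: eigenvector of the covariance with the smallest
    # eigenvalue (np.linalg.eigh returns eigenvalues in ascending order).
    cov = np.cov((points - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]
    # Project the keypoint orientation onto the tangent plane: local x-axis.
    x_axis = keypoint_dir - np.dot(keypoint_dir, normal) * normal
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(normal, x_axis)
    R = np.column_stack([x_axis, y_axis, normal])  # 3x3 local rotation
    return R, centroid                             # rotation + translation
```

For a rigid object, such per-point local frames can then vote for a single object pose; for a deformable object, the frames themselves approximate the local deformation, which matches the behaviour the abstract describes.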
|
7 |
Airborne mapping using LIDAR / Luftburen kartering med LIDAR. Almqvist, Erik, January 2010 (has links)
Mapping is a central and common task in robotics research. Building an accurate map without human assistance enables several applications, such as space missions, search and rescue, and surveillance, and can be used in dangerous areas. One application of robotic mapping is measuring changes in terrain volume. In Sweden there are over a hundred landfills that are regulated by laws which say that the growth of the landfill has to be measured at least once a year.
In this thesis, a preliminary study of methods for measuring terrain volume using an Unmanned Aerial Vehicle (UAV) and a Light Detection And Ranging (LIDAR) sensor is carried out. Different techniques are tested, including data merging strategies and regression techniques based on Gaussian Processes. In the absence of real flight scenario data, an industrial robot has been used for data acquisition. The experiment successfully measured the volume difference between scenarios relative to the resolution of the LIDAR. However, for more accurate volume measurements and better evaluation of the algorithms, a better LIDAR is needed.
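Once two scans have been merged (or regressed) onto a common horizontal elevation grid, the volume change itself reduces to summing per-cell height differences. A minimal sketch, assuming both scans are already rasterized to the same grid; the function name and toy data are illustrative, not from the thesis:

```python
import numpy as np

def volume_change(height_before, height_after, cell_size):
    """Volume difference (m^3) between two elevation rasters of the same
    area, e.g. yearly LIDAR scans of a landfill gridded onto a common
    horizontal grid. cell_size is the grid spacing in metres."""
    diff = np.asarray(height_after) - np.asarray(height_before)
    return diff.sum() * cell_size ** 2

# Toy example: a 10 m x 10 m area gridded at 1 m, raised uniformly by 0.5 m.
before = np.zeros((10, 10))
after = before + 0.5
print(volume_change(before, after, cell_size=1.0))  # 50.0
```

The LIDAR resolution enters through the accuracy of each cell height, which is why the abstract reports the measured difference relative to the sensor's resolution.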
|
9 |
Pedagogika jogových poloh vzhledem ke zdraví populace se zaměřením na záklonové polohy a jejich přínos pro fyzioterapii. / Pedagogy of yoga positions with regard to population health, focusing on backbend positions and their benefit to physiotherapy. Kozáková, Kristýna, January 2018 (has links)
Title: Pedagogy of yoga positions with regard to population health, focusing on backbend positions and their benefit to physiotherapy. Objectives: The aim of the thesis is to collect and clarify, in the theoretical part, sufficient information about the anatomical, kinesiological and biomechanical aspects of backbend positions. After that, I elaborate the theoretical background of yoga, its history and philosophy, and give a detailed description of selected backbend positions (Ustrasana, Bhujangasana, Urdhva Mukha Svanasana). Based on the research questions, in the practical part I assess whether there are discrepancies between the theoretical basis of the positions and the way they are taught in classes today. I evaluate how lessons are currently taught in open yoga classes and whether they include these positions. I also examine the use of compensatory aids and the role education plays in this field. An integral piece of the practical part is a summary of whether yoga classes take into account the health of the clients. Methods: The work is theoretical-empirical in character. The research methods are observation and interviews with yoga instructors at publicly accessible yoga lessons. Results: It has been confirmed...
|
10 |
Real-time Software Hand Pose Recognition using Single View Depth Images. Alberts, Stefan Francois, 04 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: The fairly recent introduction of low-cost depth sensors such as Microsoft’s Xbox Kinect
has encouraged a large amount of research on the use of depth sensors for many
common Computer Vision problems. Depth images are advantageous over normal
colour images because of how easily objects in a scene can be segregated in real-time.
Microsoft used the depth images from the Kinect to successfully separate multiple
users and track various larger body joints, but has difficulty tracking smaller joints
such as those of the fingers. This is a result of the low resolution and noisy nature of
the depth images produced by the Kinect.
The objective of this project is to use the depth images produced by the Kinect to
remotely track the user’s hands and to recognise the static hand poses in real-time.
Such a system would make it possible to control an electronic device from a distance
without the use of a remote control. It can be used to control computer systems during
computer aided presentations, translate sign language and to provide more hygienic
control devices in clean rooms such as operating theatres and electronic laboratories.
The proposed system uses the open-source OpenNI framework to retrieve the depth
images from the Kinect and to track the user’s hands. Random Decision Forests are
trained using computer generated depth images of various hand poses and used to
classify the hand regions from a depth image. The region images are processed using
a Mean-Shift based joint estimator to find the 3D joint coordinates. These coordinates
are finally used to classify the static hand pose using a Support Vector Machine trained
using the libSVM library. The system achieves a final accuracy of 95.61% when tested
against synthetic data and 81.35% when tested against real world data.
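The middle stage of the pipeline above, turning per-pixel hand-region classifications into 3D joint coordinates with a Mean-Shift based estimator, can be sketched with a flat-kernel mean shift. This is an illustrative reconstruction, not the thesis code: the per-pixel votes are assumed to be already lifted to 3D points, and the bandwidth value is a made-up placeholder.

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=0.1, iters=50):
    """Flat-kernel mean shift: move an initial estimate to the densest
    cluster of 3D points. Here the points are per-pixel votes for one
    joint, and the converged mode is taken as that joint's coordinate."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(points - x, axis=1)
        near = points[dists < bandwidth]       # votes inside the window
        if len(near) == 0:
            break
        new_x = near.mean(axis=0)              # shift to the local mean
        if np.linalg.norm(new_x - x) < 1e-6:   # converged to a mode
            break
        x = new_x
    return x
```

Because the mode is a local density peak, scattered misclassified pixels pull the estimate far less than a plain average would, which is the usual motivation for mean shift in this role; the resulting joint coordinates then form the feature vector for the final SVM pose classifier.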
|