51

Prosthetic vision : Visual modelling, information theory and neural correlates

Hallum, Luke Edward, Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Electrical stimulation of the retina affected by photoreceptor loss (e.g., cases of retinitis pigmentosa) elicits the perception of luminous spots (so-called phosphenes) in the visual field. This phenomenon, attributed to the relatively high survival rates of neurons comprising the retina's inner layer, serves as the cornerstone of efforts to provide a microelectronic retinal prosthesis -- a device analogous to the cochlear implant. This thesis concerns phosphenes -- their elicitation and modulation, and, in turn, image analysis for use in a prosthesis. It begins with a comparative review of visual modelling of electrical epiretinal stimulation and analogous acoustic modelling of electrical cochlear stimulation. The latter models involve coloured noise played to normal listeners so as to investigate speech processing and electrode design for use in cochlear implants. Subsequently, four experiments (three psychophysical and one numerical), and two statistical analyses, are presented. Intrinsic-signal optical imaging in cerebral cortex is canvassed in an appendix. The first experiment describes a visual tracking task administered to 20 normal observers afforded simulated prosthetic vision. Fixation, saccade, and smooth pursuit, and the effect of practice, were assessed. Further, an image analysis scheme is demonstrated that, compared to existing approaches, improved fixation and pursuit (but not saccade) accuracy (35.8% and 6.8%, respectively), and required less phosphene array scanning. Subsequently, (numerical) information-theoretic reasoning is provided for the scheme's superiority. This reasoning was then employed to further optimise the scheme (resulting in a filter comprising overlapping Gaussian kernels), and may be readily extended to arbitrary arrangements of many phosphenes. A face recognition study, wherein stimuli comprised either size- or intensity-modulated phosphenes, is then presented. The study involved unpracticed observers (n=85), and showed no 'size'-versus-'intensity' effect. Overall, a 400-phosphene (100-phosphene) image afforded subjects 89.0% (64.0%) correct recognition (two-interval forced-choice paradigm) when five seconds' scanning was allowed. Performance fell (64.5%) when the 400-phosphene image was stabilised on the retina and presented briefly. Scanning was similar in 400- and 100-phosphene tasks. The final chapter presents the statistical effects of sampling and rendering jitter on the phosphene image. These results may generalise to low-resolution imaging systems involving loosely packed pixels.
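
As a rough illustration (not Hallum's actual scheme) of how a camera image might be reduced to a phosphene image with a filter built from overlapping Gaussian kernels, the following Python sketch samples an image on a regular grid of Gaussian-weighted receptive fields; the grid size and kernel width are arbitrary assumptions.

    import numpy as np

    def phosphene_render(image, grid=(20, 20), sigma_scale=0.6):
        """Sample a 2-D image with a regular grid of overlapping Gaussian
        kernels: each phosphene takes the Gaussian-weighted local mean of
        the image around its centre. Grid size and kernel width are toy
        values, not those optimised in the thesis."""
        h, w = image.shape
        gy, gx = grid
        ys = np.linspace(0, h - 1, gy)              # phosphene centres (rows)
        xs = np.linspace(0, w - 1, gx)              # phosphene centres (cols)
        sigma = sigma_scale * min(h / gy, w / gx)   # kernels overlap their neighbours
        Y, X = np.mgrid[0:h, 0:w]
        intensities = np.zeros((gy, gx))
        rendered = np.zeros((h, w))
        for i, cy in enumerate(ys):
            for j, cx in enumerate(xs):
                g = np.exp(-((Y - cy) ** 2 + (X - cx) ** 2) / (2 * sigma ** 2))
                intensities[i, j] = (g * image).sum() / g.sum()
                rendered += intensities[i, j] * g   # draw the phosphene
        return intensities, rendered

    # Example: reduce a random 240x320 "camera frame" to a 20x20 phosphene image
    levels, preview = phosphene_render(np.random.rand(240, 320))
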
52

Hybrid foveated vision system for video surveillance and robotic navigation

Rameau, François 02 December 2014 (has links)
The primary goal of this thesis is to develop a binocular vision system using two different types of camera. The system studied here is composed of an omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras with different characteristics is called a hybrid stereo-vision system. The pair combines the advantages of both: a global view of the scene from the omnidirectional camera, and foveation, i.e. an accurate view of a region of interest detected in the panoramic image, with an adjustable level of detail using the zoom. This thesis presents contributions in visual tracking using omnidirectional sensors, PTZ camera self-calibration, hybrid vision system calibration, and structure from motion using a hybrid stereo-vision system, enabling target tracking with the camera rig and a 3D reconstruction of the environment for studying the motion of a robot equipped with the sensor.
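
As a minimal sketch of the foveation step described above (not the calibration developed in the thesis), the function below maps a pixel detected in an idealised equirectangular panorama to pan/tilt angles that could be sent to a PTZ camera; the assumed 360x180-degree coverage and the example image size are simplifications.

    def panorama_to_pan_tilt(u, v, width, height):
        """Map pixel (u, v) of an equirectangular panorama to pan/tilt angles
        in degrees, assuming the panorama spans 360 deg horizontally and
        180 deg vertically (an idealisation of the omnidirectional camera)."""
        pan = (u / width) * 360.0 - 180.0    # -180 (left) .. +180 (right)
        tilt = 90.0 - (v / height) * 180.0   # +90 (up) .. -90 (down)
        return pan, tilt

    # Example: centre of a region of interest detected in a 2048x1024 panorama
    pan, tilt = panorama_to_pan_tilt(1536, 400, 2048, 1024)
    print(f"command PTZ: pan={pan:.1f} deg, tilt={tilt:.1f} deg")
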
53

Robust micro/nano-positioning by visual servoing

Cui, Le 26 January 2016 (has links)
With the development of nanotechnology, it has become possible to design and assemble nano-objects. For robust and reliable automation, handling and manipulation tasks at the nanoscale have been increasingly required over the last decade. Vision is an indispensable way to observe the world at the micro- and nanoscale, and vision-based control is an efficient solution to control problems in robotics. In this thesis, we address the issue of micro- and nano-positioning by visual servoing in a scanning electron microscope (SEM). As foundational material, SEM image formation and SEM vision geometry models are studied first, and a nonlinear optimization process for SEM calibration is presented, considering both the perspective and the parallel projection models. In this study, it is found that the depth information is difficult to observe from the variation of the pixel position of the sample in the SEM image at high magnification. To address the non-observability of motion along the depth axis in an SEM, image defocus information is used as a visual feature to control motion along that axis. A hybrid visual servoing scheme is proposed for the 6-DoF micro-positioning task, using both image defocus information and image photometric information; it has been validated using a parallel robot in an SEM. Based on the same idea, a closed-loop control scheme for SEM autofocusing is introduced and validated by experiments. Finally, to achieve visual guidance in an SEM, a template-based visual tracking and 3D pose estimation framework is proposed. This method is robust to the defocus blur caused by motion along the depth axis, since the defocus level is modelled within the visual tracking framework.
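
To make the use of defocus information concrete, here is a generic hill-climbing autofocus loop driven by an image sharpness score (variance of a discrete Laplacian). It is only a sketch in the spirit of the closed-loop autofocus described above, not the thesis's control scheme; grab_image and move_focus are hypothetical placeholders for the instrument interface, and the step sizes are arbitrary.

    import numpy as np

    def sharpness(image):
        """Variance of a discrete Laplacian: larger values mean sharper focus."""
        lap = (-4.0 * image
               + np.roll(image, 1, 0) + np.roll(image, -1, 0)
               + np.roll(image, 1, 1) + np.roll(image, -1, 1))
        return lap.var()

    def autofocus(grab_image, move_focus, step=1.0, shrink=0.5, tol=0.05, max_iter=50):
        """Hill-climbing autofocus: keep stepping along the focus axis while
        sharpness improves; otherwise undo the step, reverse direction and
        shrink the step size. `grab_image()` returns a 2-D array and
        `move_focus(dz)` shifts the focus/working distance; both are assumed
        to be supplied by the microscope driver."""
        best = sharpness(grab_image())
        for _ in range(max_iter):
            move_focus(step)
            score = sharpness(grab_image())
            if score > best:
                best = score            # improvement: keep this direction
            else:
                move_focus(-step)       # undo the unhelpful move
                step *= -shrink         # reverse and refine
            if abs(step) < tol:
                break
        return best
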
54

Visual Tracking Using Deep Motion Features

Gladh, Susanna January 2016 (has links)
Generic visual tracking is a challenging computer vision problem, where the position of a specified target is estimated through a sequence of frames. The only given information is the initial location of the target. Therefore, the tracker has to adapt to and learn any kind of object, which it describes through visual features used to differentiate the target from the background. Standard appearance features only capture momentary visual information. This master's thesis investigates the use of deep features extracted from optical flow images processed in a deep convolutional network. The optical flow is calculated from two consecutive images, and thereby captures the dynamic nature of the scene. Results show that this information is complementary to the standard appearance features and improves the performance of the tracker. Deep features are typically very high dimensional. Employing dimensionality reduction can increase both the efficiency and the performance of the tracker. As a second aim of this thesis, PCA and PLS were evaluated and compared. The evaluations show that the two methods are almost equal in performance, with PLS actually receiving a slightly better score than the popular PCA. The final proposed tracker was evaluated on three challenging datasets and was shown to outperform other state-of-the-art trackers.
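
A small sketch of the kind of pre-processing pipeline described above, assuming OpenCV and scikit-learn are available: dense optical flow between two consecutive grayscale frames is packed into an image (which a pretrained CNN would then consume), and the resulting high-dimensional feature vectors are compressed with PCA. The flow parameters, the placeholder feature matrix and the number of components are assumptions; the CNN itself is omitted.

    import numpy as np
    import cv2                                    # OpenCV, assumed available
    from sklearn.decomposition import PCA

    def flow_image(prev_gray, next_gray):
        """Dense Farneback optical flow between two consecutive 8-bit grayscale
        frames, packed into an HSV-style image (direction -> hue, magnitude ->
        value), a common input format for a convolutional network."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hsv = np.zeros((*prev_gray.shape, 3), dtype=np.uint8)
        hsv[..., 0] = ang * 180 / np.pi / 2       # hue encodes flow direction
        hsv[..., 1] = 255
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    # Dimensionality reduction of (deep) feature vectors, one row per sample.
    features = np.random.rand(200, 4096)          # placeholder for CNN activations
    compressed = PCA(n_components=64).fit_transform(features)
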
55

Stability of a Vision Based Platooning System

Köling, Ann, Kjellberg, Kristina January 2021 (has links)
The current development of autonomous vehicles allows several new applications to form and evolve. One of these is platooning, where several vehicles drive closely together with automatic car following. The method of getting information about the other vehicles in a platoon can vary; one such method is using visual information from a camera. Having a camera on board an autonomous vehicle has further potential, for example for recognition of objects in the vehicle's surroundings. This bachelor thesis uses small RC vehicles to test an example of a vision-based platooning system. The system is then evaluated using a step response, from which the stability of the system is analyzed. Additionally, a previously developed communication-based platooning system, so far tested only in simulation, was tested in the same way on the same set of vehicles and its stability compared. The main conclusion of this thesis is that it is feasible to use a camera, an ArUco marker and an Optimal Velocity Relative Velocity (OVRV) model to achieve a vision-based platoon on a small set of RC vehicles. / Bachelor's degree project in electrical engineering 2021, KTH, Stockholm
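
As a hedged sketch of the control law named above: the Optimal Velocity Relative Velocity (OVRV) model commands an acceleration from the spacing to the preceding vehicle, here estimated from the apparent size of an ArUco marker, and from the relative speed. The optimal-velocity function, gains, marker size and focal length below are illustrative assumptions, not the values used in the thesis.

    import math

    def optimal_velocity(spacing, v_max=2.0, s0=0.3, s1=1.0):
        """Toy optimal-velocity function: zero at zero spacing, saturating
        towards v_max for large spacing (tanh shape, arbitrary constants)."""
        return 0.5 * v_max * (math.tanh((spacing - s0) / s1) + math.tanh(s0 / s1))

    def ovrv_acceleration(spacing, v_follower, v_leader, k=0.8, lam=0.5):
        """OVRV car-following law: relax towards the optimal velocity for the
        current spacing and towards the leader's speed (gains are illustrative)."""
        return k * (optimal_velocity(spacing) - v_follower) + lam * (v_leader - v_follower)

    def spacing_from_marker(marker_pixel_width, marker_size_m=0.10, focal_px=600.0):
        """Rough pinhole range estimate from the apparent pixel width of an
        ArUco marker of known physical size (all parameters assumed)."""
        return focal_px * marker_size_m / marker_pixel_width

    # One control step: estimate the gap from the marker, then command acceleration
    gap = spacing_from_marker(marker_pixel_width=80)             # about 0.75 m
    accel = ovrv_acceleration(gap, v_follower=1.0, v_leader=1.2)
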
56

Robust visual detection and tracking of complex objects : applications to space autonomous rendez-vous and proximity operations

Petit, Antoine 19 December 2013 (has links) (PDF)
In this thesis, we address the issue of fully localizing a known object through computer vision, using a monocular camera, which is a central problem in robotics. Particular attention is paid to space robotics applications, with the aim of providing a unified visual localization system for autonomous navigation during space rendezvous and proximity operations. Two main challenges are tackled: initially detecting the targeted object and then tracking it frame-by-frame, providing the complete pose between the camera and the object, given the 3D CAD model of the object. For detection, the pose estimation process is based on the segmentation of the moving object and on an efficient probabilistic edge-based matching and alignment procedure between a set of synthetic views of the object and a sequence of initial images. For the tracking phase, pose estimation is handled by a 3D model-based tracking algorithm, for which we propose three different types of visual features, representing the object by its edges, its silhouette and a set of interest points. The reliability of the localization process is evaluated by propagating the uncertainty from the errors of the visual features. This uncertainty also feeds a linear Kalman filter on the camera velocity parameters. Qualitative and quantitative experiments have been performed on various synthetic and real data with challenging imaging conditions, showing the efficiency and benefits of the different contributions and their suitability for space rendezvous applications.
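
The abstract mentions propagating feature-error uncertainty into a linear Kalman filter on the camera velocity; the sketch below shows a generic linear Kalman filter predict/update step with a constant-velocity state, in that spirit only. The state layout, measurement model and covariance values are arbitrary illustrations, not the thesis's formulation.

    import numpy as np

    def kalman_step(x, P, z, R, Q, dt=1.0 / 30.0):
        """One predict/update cycle of a linear Kalman filter with a
        constant-velocity model. State x = (position, velocity) per axis;
        measurement z = position only; R could be the covariance propagated
        from the visual-feature errors."""
        n = x.size // 2
        F = np.eye(2 * n)
        F[:n, n:] = dt * np.eye(n)                       # position += dt * velocity
        H = np.hstack([np.eye(n), np.zeros((n, n))])     # observe position only
        x = F @ x                                        # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                              # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2 * n) - K @ H) @ P
        return x, P

    # Example: 3-D position / 3-D velocity state
    x, P = np.zeros(6), np.eye(6)
    Q, R = 1e-3 * np.eye(6), 1e-2 * np.eye(3)
    x, P = kalman_step(x, P, z=np.array([0.1, -0.05, 2.0]), R=R, Q=Q)
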
57

Improvement of visual-motor integration in 6- to 8-year-old children with attention-deficit hyperactivity disorder / van Wyk J.

Van Wyk, Yolanda January 2011 (has links)
The visual system and good ocular motor control play an important role in the effective development of gross motor, sport, fine motor and academic skills (Erhardt et al., 1988:84; Desrocher, 1999:36; Orfield, 2001:114). Various researchers report a link between ocular motor problems and attention-deficit hyperactivity disorder (ADHD) (Cheatum & Hammond, 2000:263; Farrar et al., 2001:441; Gould et al., 2001:633; Armstrong & Munoz, 2003:451; Munoz et al., 2003:510; Borsting et al., 2005:588; Hanisch et al., 2005:671; Mason et al., 2005:1345; Loe et al., 2009:432). A few studies have analysed the links between ADHD and ocular motor control with regard to matters like visual attention, visual perception and ocular motor control such as eye movement outside the normal fixation point, but no studies have been reported on the status of ocular motor control in South African populations, or on the effect of visual-motor intervention on the ocular motor control or visual-motor integration of learners with ADHD. The aim of the study was twofold: firstly, to determine the ocular motor control functions and the status of visual-motor integration of a selected group of 6- to 8-year-old learners with ADHD in Brakpan, South Africa; and secondly, to determine whether a visual-motor-based intervention programme can improve the ocular motor control and the status of visual-motor integration of such a group. Statistica for Windows 2010 was used to analyse the data. The Sensory Input Screening measuring instrument and the Quick Neurological Screening Test II (QNST-II) were used to assess the ocular motor control functions (fixation, ocular alignment, visual tracking and convergence-divergence), while the Beery Developmental Test of Visual-Motor Integration (VMI, 4th edition) was used to determine the status of the learners' visual-motor integration (VMI), visual perception (VP) and motor coordination (MC). The Disruptive Behaviour Scale, a checklist for ADHD (Bester, 2006), was used to identify the learners with ADHD. Fifty-six learners (31 boys, 25 girls, with an average age of 7.03 ± 0.65 years) participated in the pre-test and were divided into an ADHD (n=39) and a non-ADHD (n=16) group for aim one. Two-way tables were used to determine the percentage of ocular motor control deficits in the learners with and without ADHD, and an independent t-test was used to analyse the visual-motor integration of these learners. The Pearson chi-squared test was used to determine the practical significance of differences in VMI and VP (d>0.05). The results reveal that the majority of learners displayed ocular motor control deficits, regardless of whether they were classified with ADHD or not. The largest percentage of learners fell into Class 2 (moderate deficits), particularly with regard to horizontal (68.57%; 52.63%; w=0.16) and vertical tracking (65.71%; 73.68%), as well as convergence-divergence (80%; 78.95%; w=0.11). However, it appears that ADHD learners experience more serious problems (Class 3) with visual tracking than learners without ADHD (both eyes: 22.86% compared to 10.53% (w=0.22); right eye: 11.43% compared to 0% (p=0.05; w=0.34); left eye: 14.29% compared to 0% (p=0.02; w=0.38)).
Learners with ADHD displayed a practically significant difference in visual perception (d=0.37) and motor coordination (d=0.5) compared to learners without ADHD, who achieved better results. For aim two the subjects were divided into three groups. A pre-test/post-test design based on an availability sample of three groups (an intervention group with ADHD (n=20), a control group with ADHD (n=10) and a control group without ADHD (n=17)) was used for this part of the study. The intervention group participated in a nine-week (3x/week, 45 minutes per session) visual-motor-based intervention programme in which the ocular motor control functions section was applied for about 5 minutes per learner. Forty-seven learners (25 boys and 22 girls) with an average age of 6.95 years (±0.69) constituted the experimental group, while the control group with ADHD (average age 7.2 years, ±0.79) and the control group without ADHD (average age 7.12 years, ±0.60) did not receive any intervention and only participated in the pre- and post-test sessions. A two-way cross-tabulation table was used to determine the changes in ocular motor control functions. These results mainly revealed that practically significant changes occurred in all three groups, whether improvement or deterioration across the various classes of ocular motor control. For horizontal and vertical visual tracking, as well as convergence-divergence, more subjects in the intervention group moved back from Class 3 (serious cases) to Class 1 (no deficits) and Class 2 (moderate deficits) than in the other two groups, which had received no intervention. Independent t-testing was used to analyse intragroup differences in the visual-motor integration subdivisions, while a covariance analysis (ANCOVA), corrected for pre-test differences, was used to determine adjusted average post-test difference values. These results revealed that the motor coordination of the intervention group improved more than that of the control group with ADHD (p=0.18), which suggests that the intervention programme did have an effect on this specific skill. The overall indication of the results is that learners with ADHD tend to achieve poorer results in ocular motor control tests and in skills involving visual-motor integration, visual perception and motor coordination than learners without ADHD. Although only a minor improvement was identified in the experimental group after participation in the intervention programme, it is recommended, with regard to motor coordination in particular, that a similar programme be compiled for ADHD learners that focuses more specifically on the ocular motor control needs of each learner, and that it be presented on a more individual basis in order to accomplish greater improvement. / Thesis (M.A. (Kinderkinetics))--North-West University, Potchefstroom Campus, 2012.
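
For readers less familiar with the statistics quoted above, the sketch below shows how an independent t-test and a Cohen's d effect size of the kind reported (e.g. d=0.37, d=0.5) are typically computed; the group sizes match the study (n=39 and n=16) but the scores are random placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    adhd = rng.normal(12.0, 3.0, size=39)       # placeholder scores, ADHD group
    non_adhd = rng.normal(13.5, 3.0, size=16)   # placeholder scores, non-ADHD group

    t, p = stats.ttest_ind(adhd, non_adhd, equal_var=False)   # independent t-test

    def cohens_d(a, b):
        """Effect size: mean difference over the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                         / (na + nb - 2))
        return (b.mean() - a.mean()) / pooled

    print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(adhd, non_adhd):.2f}")
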
59

Inverse optimal control for redundant systems of biological motion

Panchea, Adina 10 December 2015 (has links)
This thesis addresses inverse optimal control problems (IOCP) to find the cost functions for which observed human motions are optimal. Assuming that the human motion observations are perfect, while the human motor control process is imperfect, we propose an approximately optimal control algorithm. Applying our algorithm to human motion observations collected for human arm trajectories during an industrial screwing task, postural coordination in a visual tracking task, and a walking gait initialization task, we performed an open-loop analysis. For the three cases, our algorithm returned the cost functions that best fit these data while approximately satisfying the Karush-Kuhn-Tucker (KKT) optimality conditions. Its computational time is low in all cases, providing an opportunity for its use in online applications. For the visual tracking task, we investigated a closed-loop model with two PD feedback loops. With artificial data, we obtained consistent results for the visual tracking task in terms of the trends of the feedback gains and the criteria found by our algorithm. In the second part of our work, we proposed a new approach to solving the IOCP in a bounded-error framework. In this approach, we assume that the human motor control process is perfect while the observations are imperfect, with errors and uncertainties acting on them; the errors are otherwise unknown but lie within known bounds. Our approach finds the convex hull of the set of feasible cost functions with the certainty that it includes the true solution, and we guarantee this numerically using interval analysis tools.
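
To give the flavour of the approximate-KKT idea mentioned above (this is not the authors' algorithm), the sketch below recovers non-negative weights of two candidate cost functions by minimising the norm of the Lagrangian stationarity residual at an observed decision of a toy equality-constrained problem; the candidate costs, the constraint and the observation are all invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Toy setting: the observed decision x_obs is assumed to (approximately)
    # minimise a weighted sum of candidate costs subject to x1 + x2 = 1.
    x_obs = np.array([0.6, 0.4])

    def grad_c1(x):                          # candidate cost 1: effort ||x||^2
        return 2 * x

    def grad_c2(x):                          # candidate cost 2: deviation from a reference
        return 2 * (x - np.array([1.0, 0.0]))

    A = np.array([[1.0, 1.0]])               # Jacobian of the equality constraint
    G = np.column_stack([grad_c1(x_obs), grad_c2(x_obs)])

    def kkt_residual(p):
        """Norm of the Lagrangian stationarity residual G w + A^T lambda."""
        w, lam = p[:2], p[2:]
        return np.linalg.norm(G @ w + A.T @ lam)

    res = minimize(kkt_residual, x0=np.array([0.5, 0.5, 0.0]),
                   constraints=[{"type": "eq", "fun": lambda p: p[0] + p[1] - 1.0}],
                   bounds=[(0, 1), (0, 1), (None, None)])
    weights = res.x[:2]                      # recovered weights, here about (0.8, 0.2)
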
