  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Identification of key visual areas that guide an assembly process in real and virtual environments

Rojas-Murillo, Salvador 01 December 2017 (has links)
Today’s assembly operations represent about 15–70% of all manufacturing time and about 40% of all manufacturing costs, and manual assembly processes still account for a significant portion of these operations. Furthermore, today’s manufacturing environment requires a well-trained and flexible workforce that can easily adapt to changing products and processes. Unfortunately, manufacturing training is often performed on the assembly line using the master-apprentice model, a slow and expensive approach that results in unsafe and costly training conditions. Previous research has considered the use of virtual environments (VEs) for training in fields such as aviation, driving, construction, medicine, and manufacturing, among many others. However, to date, no assembly studies have succeeded in demonstrating a positive transfer of knowledge from virtual environments to real environments. On the other hand, several eye-tracking studies in radiology, air-traffic control, driving, and reading show that participants with higher levels of experience have different eye-scan patterns than participants with lower levels of experience. However, it is unknown how visual scans are affected by practice. Furthermore, several empirical visuomotor studies of task-oriented processes in real environments show that observers fixate on the areas that are crucial to the required task. However, we do not know which visual elements must be observed when performing, and when learning to perform, an assembly task, nor the effects of following visual instructions and of visual distractors during this process. Finally, we have yet to establish what observation differences may exist between real and virtual environments with regard to these unknowns. This work presents the results of an assembly task that required participants to follow visual instructions and to select assembly objects among similar distractors.
This assembly task was performed for ten cycles in real and virtual environments, and we used an eye-tracking device to record participants’ visual scans. We successfully identified the areas that need to be observed for an assembly task in both environments, as well as the effect of visual instructions and distractors on a visual scan. We found statistically significant differences in visual scans by assembly cycle and environment (p < 0.05). We also identified a connection between learning curves and participants’ eye scans, showing a significant decrease in eye-tracking metrics (visit count, visit duration, fixation count, and fixation duration) between the first and tenth cycles (ΔΜ), ranging from 37.36% to 48.77% for visual distractors and from 35.17% to 54.82% for visual instructions. Participants’ observations became more efficient with practice, not only in identifying distractors and following visual instructions but also in developing an ability to observe key visual elements. For the real environment (RE) we found a positive Pearson correlation between the proportion of fixation duration and assembly cycle for the key visual areas (p < 0.002), and a negative Pearson correlation between the proportion of fixation duration and assembly cycle for the non-key visual areas (p < 0.046). Similar results were obtained for the virtual environment (VE).
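The learning-curve analysis above hinges on correlating an eye-tracking metric with the assembly cycle. A minimal sketch of that computation, using hypothetical per-cycle proportions of fixation duration (the study's actual measurements are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: proportion of fixation duration on key visual areas,
# one value per assembly cycle (rising with practice, as the study reports):
cycles = list(range(1, 11))
key_area_prop = [0.42, 0.45, 0.49, 0.51, 0.55, 0.58, 0.60, 0.63, 0.65, 0.68]
r = pearson_r(cycles, key_area_prop)   # strongly positive for this data
```

A negative correlation for non-key areas would follow the same computation with proportions that fall across cycles.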
202

Développement d'un système de tracking vidéo sur caméra robotisée / Development of a video tracking system on a robotic camera

Penne, Thomas 14 October 2011 (has links)
Ces dernières années se caractérisent par la prolifération des systèmes de vidéo-surveillance et par l’automatisation des traitements que ceux-ci intègrent. Parallèlement, le problème du suivi d’objets est devenu en quelques années un problème récurrent dans de nombreux domaines et notamment en vidéo-surveillance. Dans le cadre de cette thèse, nous proposons une nouvelle méthode de suivi d’objet, basée sur la méthode Ensemble Tracking et intégrant deux améliorations majeures. La première repose sur une séparation de l’espace hétérogène des caractéristiques en un ensemble de sous-espaces homogènes appelés modules et sur l’application, sur chacun d’eux, d’un algorithme basé Ensemble Tracking. La seconde adresse, quant à elle, l’apport d’une solution à la nouvelle problématique de suivi induite par cette séparation des espaces, à savoir la construction d’un filtre particulaire spécifique exploitant une pondération des différents modules utilisés afin d’estimer à la fois, pour chaque image de la séquence, la position et les dimensions de l’objet suivi, ainsi que la combinaison linéaire des différentes décisions modulaires conduisant à l’observation la plus discriminante. Les différents résultats que nous présentons illustrent le bon fonctionnement global et individuel de l’ensemble des propriétés spécifiques de la méthode et permettent de comparer son efficacité à celle de plusieurs algorithmes de suivi de référence. De plus, l’ensemble des travaux a fait l’objet d’un développement industriel sur les consoles de traitement de la société partenaire. En conclusion de ces travaux, nous présentons les perspectives que laissent entrevoir ces développements originaux, notamment en exploitant les possibilités offertes par la modularité de l’algorithme ou encore en rendant dynamique le choix des modules utilisés en fonction de l’efficacité de chacun dans une situation donnée. 
/ Recent years have been characterized by the proliferation of video-surveillance systems and by the automation of the processing they integrate. At the same time, object tracking has become a recurring problem in many domains, particularly in video-surveillance. In this dissertation, we propose a new object tracking method, based on the Ensemble Tracking method and integrating two main improvements. The first rests on the separation of the heterogeneous feature space into a set of homogeneous sub-spaces called modules and on the application, to each of them, of an Ensemble Tracking-based algorithm. The second addresses the new tracking problem induced by this separation by building a specific particle filter. This filter weights the modules in order to estimate, for each frame of the sequence, both the position and dimensions of the tracked object and the linear combination of modular decisions leading to the most discriminative observation. The results we present illustrate the global and individual effectiveness of the method's specific properties and allow its efficiency to be compared with that of several reference tracking algorithms. Furthermore, this work has led to an industrial implementation on the processing consoles of the partner company. In conclusion, we present the prospects opened by these developments, in particular exploiting the possibilities offered by the algorithm's modularity, or making the choice of modules dynamic according to each one's effectiveness in a given situation.
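The module-weighted particle filter described in this abstract can be illustrated with a minimal sketch; the Gaussian likelihood, the motion model, and the two "modules" below are assumptions for demonstration, not the thesis's actual design:

```python
import math
import random

random.seed(0)

def likelihood(particle, observation, sigma=1.0):
    # Gaussian likelihood of a module's observation given a particle position.
    d2 = (particle[0] - observation[0]) ** 2 + (particle[1] - observation[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def step(particles, module_obs, module_weights):
    # Score each particle by a linear combination of per-module likelihoods.
    weights = [sum(mw * likelihood(p, obs)
                   for obs, mw in zip(module_obs, module_weights))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling followed by a small random-walk diffusion.
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [(x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
            for x, y in chosen]

particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
# Two hypothetical modules (say, colour and texture) both observing near (4, 6):
for _ in range(20):
    particles = step(particles, [(4.0, 6.0), (4.2, 5.9)], [0.6, 0.4])
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
```

The weighted sum inside `step` is the point of contact with the thesis: each module contributes to the particle score in proportion to its weight, so a discriminative module dominates the estimate.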
203

Eye Tracking During Interaction with a Screen

Pavelková, Alena January 2013 (has links)
This thesis describes a system for real-time tracking of a computer screen and for determining the position of the user's gaze on that screen, using data from eye-tracking glasses and Uniform Marker Fields. The system localizes the computer screen in the images from a camera mounted on the eye tracker that captures the scene in front of the user. A marker displayed on the computer monitor is detected in this scene. After successful detection, the marker is hidden, and the screen position is subsequently tracked using interest points detected on the screen within the camera image or, if necessary, in the whole image. If interest-point tracking fails, the marker is displayed again and the system is re-initialized from the detected marker. To make the marker as unobtrusive as possible, data from the eye tracker are used to display it as far as possible from the user's area of interest. Several experiments and user tests were carried out to evaluate the performance and accuracy of the system and the obtrusiveness of the marker.
204

Visuelle und neuronale Verarbeitung von Emotionen / Visual and Neural Processing of Emotions

Roth, Katharina 29 September 2011 (has links)
The combination of eye tracking and fMRI in the neurosciences is a relatively new method that poses a technical challenge on the one hand and offers new ways of accessing neural processes on the other. In the present work, processes of the neural and visual processing of emotions were investigated by combining both methods. First, the role of different brain regions within the emotional network and the question of the laterality of emotional processing were examined. The results showed that the neural response in the different regions primarily reflects the demands placed on the respective functional unit. In the investigation of visual processing, the specific gaze patterns for the emotions fear, disgust, and happiness were characterized for the first time. Habituation effects on these gaze patterns were also examined. The joint analysis of both data sets showed a close qualitative interaction between visual and neural processes. A relationship between viewing duration and depth of processing was demonstrated.
205

Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite

Oumer, Nassir Workicho 28 September 2016 (has links)
This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of the relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repair and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar is heavy, contains rotating parts, and consumes considerable power, which conflicts with the stringent requirements of satellite design. On the other hand, inexpensive on-board cameras can provide an effective solution and work over a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms, because of direct sunlight exposure and because the glossy surface of the satellite creates strong reflections and image saturation, leading to difficulties in tracking. To address these difficulties, the relevant literature in computer vision and in satellite rendezvous and docking is examined. Two classes of problems are identified, and solutions implemented on a standard computer are provided. First, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability for the case of insufficient features, relying on a point-wise motion model. Second, we employ a robust model-based hierarchical localization method to handle the change of image features over a range of distances and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method addressing ambiguities in edge matching, and a pose detection algorithm based on appearance model learning.
For the validation of the methods, real camera images and ground-truth data, generated with a laboratory test bed that reproduces space-like conditions, are used. The experimental results indicate that camera-based methods provide robust and accurate tracking during the approach of malfunctioning satellites in spite of the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.
206

Conception et développement de composants logiciels et matériels pour un dispositif ophtalmique / Design and development of software and hardware components for an ophthalmic device

Combier, Jessica 23 January 2019 (has links)
Les recherches menées au cours de cette thèse de Doctorat s'inscrivent dans les activités du laboratoire commun OPERA (OPtique EmbaRquée Active) impliquant ESSILOR-LUXOTTICA et le CNRS. L’objectif est de contribuer au développement des “lunettes du futur” intégrant des fonctions d'obscurcissement, de focalisation ou d'affichage qui s’adaptent en permanence à la scène et au regard de l’utilisateur. Ces nouveaux dispositifs devront être dotés de capacités de perception, de décision et d’action, et devront respecter des contraintes d'encombrement, de poids, de consommation énergétique et de temps de traitement. Ils présentent par conséquent des connexions évidentes avec la robotique. Dans ce contexte, les recherches ont consisté à investiguer la structure et la construction de tels systèmes afin d’identifier leurs enjeux et difficultés. Pour ce faire, la première tâche a été de mettre en place des émulateurs de divers types de lunettes actives, qui permettent de prototyper et d’évaluer efficacement diverses fonctions. Dans cette phase de prototypage et de test, ces émulateurs s’appuient naturellement sur une architecture logicielle modulaire typique de la robotique. La seconde partie de la thèse s'est focalisée sur le prototypage d’un composant clé des lunettes du futur, qui implique une contrainte supplémentaire de basse consommation : le système de suivi du regard, aussi appelé oculomètre. Le principe d’un assemblage de photodiodes et d’un traitement par réseau de neurones a été proposé. Un simulateur a été mis au point, ainsi qu’une étude de l'influence de l'agencement des photodiodes et de l’hyper-paramétrisation du réseau sur les performances de l'oculomètre. / The research carried out during this doctoral thesis takes place within the OPERA joint laboratory (OPtique EmbaRquée Active) involving ESSILOR-LUXOTTICA and the CNRS. 
The aim is to contribute to the development of the "glasses of the future", which feature obscuration, focusing, or display capabilities that continuously adapt to the scene and the user's gaze. These new devices will be endowed with perception, decision, and action capabilities, and will have to respect constraints on size, weight, energy consumption, and processing time. They therefore have obvious connections with robotics. In this context, the structure and construction of such systems have been investigated in order to identify their issues and difficulties. To that end, the first task was to set up emulators of various types of active glasses, which enable the prototyping and effective evaluation of various functions. In this prototyping and testing phase, these emulators naturally rely on a modular software architecture typical of robotics. The second part of the thesis focused on the prototyping of a key component that carries an additional low-power constraint: the eye-tracking system, also known as a gaze tracker. The principle of a photodiode assembly processed by a neural network has been proposed. A simulator has been developed, along with a study of the influence of the photodiode arrangement and of the network's hyper-parameters on the performance of the eye tracker.
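As a rough illustration of photodiode-based gaze estimation, the sketch below calibrates a model that maps simulated diode responses to a gaze angle. The thesis proposes a neural network; a linear least-squares fit stands in here, and the diode response model, diode placement, and calibration angles are all assumptions:

```python
import math

def design_row(g, centers, width=0.5):
    # Simulated photodiode responses for gaze angle g, plus a constant bias term.
    return [math.exp(-((g - c) / width) ** 2) for c in centers] + [1.0]

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

centers = [-1.0, -0.5, 0.0, 0.5, 1.0]           # diode placement (assumed)
gazes = [-0.8 + 0.1 * i for i in range(17)]     # calibration gaze angles

# Least squares: minimize ||Phi w - gazes||^2, i.e. (Phi^T Phi) w = Phi^T g.
Phi = [design_row(g, centers) for g in gazes]
m = len(Phi[0])
AtA = [[sum(row[i] * row[j] for row in Phi) for j in range(m)] for i in range(m)]
Atb = [sum(row[i] * g for row, g in zip(Phi, gazes)) for i in range(m)]
w = solve(AtA, Atb)

# Predict the gaze for an unseen angle; it should land close to 0.25.
est = sum(wi * ri for wi, ri in zip(w, design_row(0.25, centers)))
```

The same design matrix makes it easy to study the influence of diode placement, which is one of the questions the thesis's simulator investigates.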
207

Demand Transition, Tracking Accuracy, and Stress: Resource-Depletion and -Allocation Models

Ungar, Nathaniel R. January 2005 (has links)
No description available.
208

UNMANNED AERIAL SYSTEM TRACKING IN URBAN CANYON ENVIRONMENTS USING EXTERNAL VISION

Zhanpeng Yang (13164648) 28 July 2022 (has links)
Unmanned aerial systems (UASs) are at the intersection of robotics and aerospace research. Their rise in popularity spurred the growth of interest in urban air mobility (UAM) across the world. UAM promises the next generation of transportation and logistics to be handled by UASs that operate closer to where people live and work. Therefore the safety and security of UASs are paramount for UAM operations. Monitoring UAS traffic is especially challenging in urban canyon environments, where traditional radar systems used for air traffic control (ATC) are limited by their line of sight (LOS).
This thesis explores the design and preliminary results of a target tracking system for urban canyon environments based on a network of camera nodes. A network of stationary camera nodes can be deployed on a large scale to overcome the LOS issue in radar systems as well as cover considerable urban airspace. A camera node consists of a camera sensor, a beacon, a real-time kinematic (RTK) global navigation satellite system (GNSS) receiver, and an edge computing device. By leveraging high-precision RTK GNSS receivers and beacons, an automatic calibration process for the proposed system is devised to simplify the time-consuming and tedious calibration of a traditional camera network present in motion capture (MoCap) systems. Through edge computing devices, the tracking system combines machine learning techniques and motion detection as hybrid measurement modes for potential targets. Particle filters are then used to estimate target tracks in real time within the airspace from measurements obtained by the camera nodes. Simulation in a 40 m × 40 m × 15 m tracking volume shows an estimation error within 0.5 m when tracking multiple targets. Moreover, a scaled-down physical test with off-the-shelf camera hardware achieves a tracking error within 0.3 m on a micro-UAS in real time.
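A core geometric step behind such a camera-node network is recovering a target position from the bearings measured by several calibrated nodes. A minimal 2-D sketch (the node poses and target position are made-up values, and the thesis's particle-filter pipeline is not reproduced):

```python
import math

def triangulate(nodes, bearings):
    """Least-squares intersection of 2-D rays given (node position, bearing)."""
    # Each ray constrains the target to a line n . x = n . p, where n is the
    # normal to the ray direction; accumulate the 2x2 normal equations.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), theta in zip(nodes, bearings):
        nx, ny = -math.sin(theta), math.cos(theta)
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        c = nx * px + ny * py
        b1 += nx * c; b2 += ny * c
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three hypothetical nodes observing a target at (20, 10):
nodes = [(0.0, 0.0), (40.0, 0.0), (0.0, 30.0)]
target = (20.0, 10.0)
bearings = [math.atan2(target[1] - py, target[0] - px) for px, py in nodes]
est = triangulate(nodes, bearings)   # recovers the target exactly here
```

With noisy bearings the same least-squares solution gives the best-fit point, which is what a filter would then smooth over time.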
209

[en] COLLABORATIVE FACE TRACKING: A FRAMEWORK FOR THE LONG-TERM FACE TRACKING / [pt] RASTREAMENTO DE FACES COLABORATIVO: UMA METODOLOGIA PARA O RASTREAMENTO DE FACES AO LONGO PRAZO

VICTOR HUGO AYMA QUIRITA 22 March 2021 (has links)
[pt] O rastreamento visual é uma etapa essencial em diversas aplicações de visão computacional. Em particular, o rastreamento facial é considerado uma tarefa desafiadora devido às variações na aparência da face, devidas à etnia, gênero, presença de bigode ou barba e cosméticos, além de variações na aparência ao longo da sequência de vídeo, como deformações, variações em iluminação, movimentos abruptos e oclusões. Geralmente, os rastreadores são robustos a alguns destes fatores, porém não alcançam resultados satisfatórios ao lidar com múltiplos fatores ao mesmo tempo. Uma alternativa é combinar as respostas de diferentes rastreadores para alcançar resultados mais robustos. Este trabalho se insere neste contexto e propõe um novo método para a fusão de rastreadores escalável, robusto, preciso e capaz de manipular rastreadores independentemente de seus modelos. O método prevê ainda a integração de detectores de faces ao modelo de fusão de forma a aumentar a acurácia do rastreamento. O método proposto foi implementado para fins de validação, tendo sido testado em diversas configurações que combinaram até cinco rastreadores distintos e um detector de faces. Em testes realizados a partir de quatro sequências de vídeo que apresentam condições diversas de imageamento o método superou em acurácia os rastreadores utilizados individualmente. / [en] Visual tracking is fundamental in several computer vision applications. In particular, face tracking is challenging because of the variations in facial appearance, due to age, ethnicity, gender, facial hair, and cosmetics, as well as appearance variations in long video sequences caused by facial deformations, lighting conditions, abrupt movements, and occlusions. Generally, trackers are robust to some of these factors but do not achieve satisfactory results when dealing with combined occurrences. An alternative is to combine the results of different trackers to achieve more robust outcomes. 
This work fits into this context and proposes a new method for scalable, robust, and accurate tracker fusion, able to combine trackers regardless of their models. The method further provides for the integration of face detectors into the fusion model to increase tracking accuracy. The proposed method was implemented for validation purposes and was tested in different configurations combining up to five different trackers and one face detector. In tests on four video sequences presenting different imaging conditions, the method outperformed the trackers used individually.
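As a sketch of model-agnostic tracker fusion, the example below combines the bounding boxes returned by several hypothetical trackers through reliability-weighted averaging; the boxes, scores, and fusion rule are illustrative only and do not reproduce the thesis's method:

```python
def fuse_boxes(boxes, scores):
    """Weighted average of (x, y, w, h) boxes; weights are reliability scores."""
    total = sum(scores)
    return tuple(sum(s * b[i] for b, s in zip(boxes, scores)) / total
                 for i in range(4))

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes, for validation."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union

# Three hypothetical trackers; the third has drifted, so its score is low:
boxes = [(100, 80, 50, 60), (104, 82, 48, 58), (180, 40, 50, 60)]
scores = [0.9, 0.8, 0.1]
fused = fuse_boxes(boxes, scores)   # stays close to the two agreeing trackers
```

Because the fusion operates only on output boxes and scores, any tracker can be plugged in regardless of its internal model, which is the property the abstract emphasizes.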
210

Design of a digital tracking control system for optical disk drive applications

Kadlec, Ronald James, 1960- January 1987 (has links)
A broad spectrum of new technologies is being explored in optical disk drive systems; optics, lasers, media, and servomechanisms are a few examples. This thesis is directed to the study of a servomechanism used in the majority of optical disk drives, commonly referred to as the tracking servomechanism. The tracking servomechanism, consisting of a fine and a coarse actuator, is mechanically analyzed by means of free-body diagrams. A transfer function for each actuator is derived. Analog compensators are designed to achieve specific phase- and gain-margin requirements. A digital compensator is derived from the analog compensator by a mapping technique. Major contributions of this thesis include studies to determine an acceptable sampling rate, number of bits, and computation delay associated with the implementation of a digital servo controller in a tracking servomechanism.
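The analog-to-digital mapping step mentioned above is commonly performed with the bilinear (Tustin) transform. A small sketch for a lead compensator; the gains, corner frequencies, and sampling rate are made-up values, not the thesis's design:

```python
def tustin_lead(K, a, b, T):
    """Coefficients (b0, b1, a1) of C(z) = (b0 + b1 z^-1) / (1 + a1 z^-1),
    obtained from C(s) = K (s + a) / (s + b) via s -> (2/T)(z - 1)/(z + 1)."""
    k = 2.0 / T
    den = k + b
    return K * (k + a) / den, K * (a - k) / den, (b - k) / den

def filter_step(x, x_prev, y_prev, coeffs):
    # One step of the difference equation y[n] = b0 x[n] + b1 x[n-1] - a1 y[n-1].
    b0, b1, a1 = coeffs
    return b0 * x + b1 * x_prev - a1 * y_prev

# Hypothetical design: lead zero at 100 rad/s, pole at 1000 rad/s, 10 kHz sampling.
b0, b1, a1 = tustin_lead(K=10.0, a=100.0, b=1000.0, T=1e-4)

# Sanity checks: the DC gain matches the analog K*a/b = 1.0, and the gain rises
# toward K at high frequency, as expected of a lead (phase-advance) compensator.
dc_gain = (b0 + b1) / (1 + a1)          # evaluate C(z) at z = 1
nyquist_gain = (b0 - b1) / (1 - a1)     # evaluate C(z) at z = -1
```

The difference equation in `filter_step` is what a digital servo controller would execute each sample period, which makes the sampling-rate and computation-delay studies mentioned in the abstract concrete.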
