81

Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention

Sina, Md Ibne January 2012 (has links)
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among many processes related to human vision, is responsible for identifying relevant regions in a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time-consuming. Hence, considering visual attention might be advantageous. A subfield of computer vision where this particular functionality is computationally emulated has been shown to retain high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements in order to enhance image understanding capabilities. Satellite images are given special attention due to their practical relevance, their inherent complexity in terms of image contents, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are directly derived from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation and color as dominant features to compute bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of the above-mentioned ones are also studied. This investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence retains the potential to be exploited in a suitable context. One interesting application of bottom-up attention, which is also examined in this work, is image segmentation. Since low-salience regions generally correspond to homogeneously textured regions in the input image, a model can be learned from a homogeneous region and used to group similar textures existing in other image regions. Experimentation demonstrates that the proposed method produces realistic segmentations of satellite images. Top-down attention, on the other hand, is influenced by the observer’s current state, such as knowledge, goals, and expectations. It can be exploited to locate target objects depending on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only. This technique is very helpful in processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed which is able to learn and quantify important bottom-up features from a set of training images and enhances such features in a test image in order to localize objects having similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance, using both texture and shape information; this combination is shown to be especially useful in the object recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments of functions and combinations of different measures, have been applied in the experiments. The developed algorithms are generalized, efficient and effective, and have the potential to be deployed for real-world problems.
A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and support a modular and flexible implementation of computational methods, including various components of visual attention models.
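As a rough illustration of the descriptor described above (Legendre moments of a local-binary-pattern map for texture, Hu moment invariants for shape), the following Python sketch shows one way such a combination could be computed. It is an assumption-laden reading of the abstract, not the author's implementation; the moment order, LBP parameters and normalisation are placeholders.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from skimage.feature import local_binary_pattern
from skimage.measure import moments_central, moments_hu, moments_normalized

def legendre_moments(img, order=4):
    """Legendre moments lambda_pq of a 2D image mapped onto [-1, 1] x [-1, 1]."""
    h, w = img.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    lam = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        Pp = legval(y, np.eye(order + 1)[p])          # P_p evaluated along rows
        for q in range(order + 1):
            Pq = legval(x, np.eye(order + 1)[q])      # P_q evaluated along columns
            norm = (2 * p + 1) * (2 * q + 1) / (4.0 * h * w)
            lam[p, q] = norm * np.sum(Pp[:, None] * Pq[None, :] * img)
    return lam.ravel()

def object_descriptor(gray_patch, binary_mask):
    """Texture (Legendre moments of an LBP map) + shape (Hu invariants of the mask)."""
    # gray_patch: 2D uint8 grayscale patch; binary_mask: same-size 0/1 object mask
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    texture = legendre_moments(lbp / (lbp.max() + 1e-9), order=4)
    shape = moments_hu(moments_normalized(moments_central(binary_mask.astype(float))))
    return np.concatenate([texture, shape])
```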
82

Traitement de stimuli sexuels visuels statiques par l’insula en EEG intracrânien : une étude de potentiels évoqués / Processing of static visual sexual stimuli by the insula in intracranial EEG: an evoked-potential study

Brideau-Duquette, Mathieu 08 1900 (has links)
No description available.
83

Saliency grouped landmarks for use in vision-based simultaneous localisation and mapping

Joubert, Deon January 2013 (has links)
The effective application of mobile robotics requires that robots be able to perform tasks with an extended degree of autonomy. Simultaneous localisation and mapping (SLAM) aids automation by providing a robot with the means of exploring an unknown environment while being able to position itself within this environment. Vision-based SLAM benefits from the large amounts of data produced by cameras but requires intensive processing of these data to obtain useful information. In this dissertation it is proposed that, as the saliency content of an image distils a large amount of the information present, it can be used to benefit vision-based SLAM implementations. The proposal is investigated by developing a new landmark for use in SLAM. Image keypoints are grouped together according to the saliency content of an image to form the new landmark. A SLAM system utilising this new landmark is implemented to demonstrate its viability. The landmark extraction, data filtering and data association routines necessary to make use of the landmark are discussed in detail. A Microsoft Kinect is used to obtain video images as well as 3D information of a viewed scene. The system is evaluated using computer simulations and real-world datasets from indoor structured environments. The datasets used include both newly generated data and freely available benchmark datasets. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
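Purely as an illustration of the grouping idea (not the dissertation's code), the sketch below clusters ORB keypoints by the connected salient region they fall into, using OpenCV's spectral-residual saliency from opencv-contrib as a stand-in for whichever saliency model the work actually uses; the threshold and all names are assumptions.

```python
import cv2
import numpy as np
from scipy import ndimage

def saliency_grouped_landmarks(gray):
    """Group ORB keypoints into candidate landmarks, one group per salient region."""
    # gray: 2D uint8 grayscale image.
    # Spectral-residual saliency as a stand-in saliency model (requires opencv-contrib).
    ok, sal_map = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(gray)
    mask = (sal_map > sal_map.mean() + sal_map.std()).astype(np.uint8)
    labels, n_regions = ndimage.label(mask)          # connected salient regions

    keypoints, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
    if descriptors is None:
        return {}

    landmarks = {}                                   # region id -> list of (keypoint, descriptor)
    for kp, desc in zip(keypoints, descriptors):
        col, row = int(round(kp.pt[0])), int(round(kp.pt[1]))
        region = labels[row, col]
        if region > 0:                               # keep only keypoints inside salient regions
            landmarks.setdefault(region, []).append((kp, desc))
    return landmarks
```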
84

Towards a Quantitative Evaluation of Layout Using Graphic Design Principles

Mosora, Daniel J. 15 May 2012 (has links)
No description available.
85

Determinants of the Acquisition of English Verb Tenses

Moore, Jana Eleanor January 2015 (has links)
This study investigated the acquisition of English tense and aspect through the manipulation of collostructional strength, instructional saliency, and frequency of use in group activities. Past research has focused on some of the factors in this study and their influence on acquisition, such as explicit instruction, but no research to date has compared the different factors to each other or attempted to create a working model of processing depth from them. Additionally, little research exists on the influence that proficiency level and personal meaningfulness have on acquisition in relation to these other determinants, or on the role of lexical aspect in verb use and acquisition. The participants in this study were all females from a university in Japan. They were separated into different groups based upon their proficiency level, and each group was given a different treatment of group activities that focused on learning the simple past tense, present perfect, and past progressive over the course of a two-week session. Pretests and immediate and delayed posttests were conducted to measure acquisition. MANCOVAs, factorial MANCOVAs, and a chi-square test were run to determine the outcome of the treatments. The results of the study suggest a loose continuum of processing depth, with explicit instruction as the most effective factor, followed by frequency of use, and with collostructional strength having minimal and conditional effectiveness. The results also suggest the power of proficiency level as a determinant of whether acquisition will occur, with personal meaningfulness playing a lesser but still important role. The lexical aspect of the verbs used showed that the learners in this study leaned heavily on activity verbs and on the progressive aspect. Overall, the results add to the growing body of knowledge on how learners develop their verb use as they acquire language. / Applied Linguistics
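For readers unfamiliar with the analysis named above, a MANCOVA on post-test gains with proficiency as a covariate can be run in Python roughly as sketched below; the variable names and the synthetic data are purely illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "treatment":        rng.choice(["explicit", "frequency", "collostructional"], n),
    "proficiency":      rng.normal(50, 10, n),       # covariate
    "past_gain":        rng.normal(5, 2, n),         # simple past post-test gain
    "perfect_gain":     rng.normal(3, 2, n),         # present perfect gain
    "progressive_gain": rng.normal(4, 2, n),         # past progressive gain
})

# Multivariate test of the treatment effect with proficiency as covariate (MANCOVA)
manova = MANOVA.from_formula(
    "past_gain + perfect_gain + progressive_gain ~ C(treatment) + proficiency",
    data=df)
print(manova.mv_test())
```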
86

A new approach to automatic saliency identification in images based on irregularity of regions

Al-Azawi, Mohammad Ali Naji Said January 2015 (has links)
This research introduces an image retrieval system which is, in different ways, inspired by the human vision system. The main problems with existing machine vision systems and image understanding are studied and identified, in order to design a system that relies on human image understanding. The main improvement of the developed system is that it uses human attention principles in the process of identifying image contents. Human attention is represented by saliency extraction algorithms, which extract the salient regions, or in other words, the regions of interest. This work presents a new approach to saliency identification which relies on the irregularity of regions. Irregularity is clearly defined and measurement tools are developed. These measures are derived from the formality and variation of the region with respect to the surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering techniques motivated a study of the available clustering techniques and the development of a technique suitable for clustering salient points. Based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system development; therefore, a fuzzy thresholding technique was developed. Evaluation methods for salient region extraction have been studied and analysed; subsequently, evaluation techniques were developed that compare the extracted regions (or points) with ground-truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms. Both quantitative and qualitative benchmarking are presented in this thesis, and a detailed discussion of the results is included. The benchmarking showed promising results for the different algorithms. The developed algorithms have been utilised in designing an integrated saliency-based image retrieval system which uses the salient regions to give a description of the scene. The system auto-labels the objects in the image by identifying the salient objects and assigns labels based on the knowledge database contents. In addition, the system identifies the unimportant part of the image (the background) in order to give a full description of the scene.
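The thesis defines irregularity precisely; the sketch below does not reproduce that definition, but gives a minimal, assumed example of scoring how much each image block deviates from its surroundings (here via a chi-square distance between local gray-level histograms), which is the general flavour of a local irregularity measure.

```python
import numpy as np

def block_hist(block, bins=32):
    # Gray-level distribution of one block (assumes uint8 intensities in [0, 255])
    h, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
    return h + 1e-9

def local_irregularity(gray, block=32):
    """Per-block irregularity score: chi-square distance to the surrounding blocks."""
    H, W = gray.shape
    rows, cols = H // block, W // block
    hists = np.array([[block_hist(gray[r*block:(r+1)*block, c*block:(c+1)*block])
                       for c in range(cols)] for r in range(rows)])
    score = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            # neighbours within a 3x3 block neighbourhood, excluding the block itself
            neigh = [hists[i, j] for i in range(max(0, r-1), min(rows, r+2))
                                 for j in range(max(0, c-1), min(cols, c+2))
                                 if (i, j) != (r, c)]
            surround = np.mean(neigh, axis=0)
            h = hists[r, c]
            score[r, c] = 0.5 * np.sum((h - surround) ** 2 / (h + surround))
    return score   # higher = more irregular, i.e. more salient than its surroundings
```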
87

Contribution à la perception visuelle multi-résolution de l’environnement 3D : application à la robotique autonome / Contribution to the visual perception multi-resolution of the 3D environment : application to autonomous robotics

Fraihat, Hossam 19 December 2017 (has links)
The research work carried out within the framework of this thesis concerns the development of a system for saliency perception in a 3D environment, taking advantage of a pseudo-3D representation. Our contribution, and the concept derived from it, starts from the hypothesis that the depth of an object with respect to the robot is an important factor in saliency detection. On this basis, a saliency-based vision system for the 3D environment was proposed, designed and validated on a platform comprising a robot equipped with a pseudo-3D sensor. The implementation of the aforementioned concept and its design were first validated on the pseudo-3D KINECT vision system. In a second step, the concept and the developed algorithms were extended to the aforementioned robotic platform. The main contributions of this thesis can be summarized as follows: A) a state of the art on the various sensors for acquiring depth information, as well as the different methods for detecting 2D and pseudo-3D saliency; B) the study of a pseudo-3D visual saliency system, built on a robust algorithm for detecting salient objects in the 3D environment; C) the implementation of a depth estimation system in centimeters for the Pepper robot; D) the implementation of the proposed concepts and methods on the aforementioned platform. The studies and experimental validations carried out confirmed that the proposed approaches increase the autonomy of robots in a real 3D environment.
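Item C) above (depth estimation in centimeters) essentially amounts to back-projecting a depth reading through the pinhole camera model. The sketch below is a generic illustration with assumed Kinect-like intrinsics, not the calibration actually used for the Pepper robot.

```python
import numpy as np

# Assumed pinhole intrinsics (Kinect-like placeholder values, not Pepper's calibration)
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_cm(depth_mm, u, v):
    """3D position (in cm, camera frame) of pixel (u, v) given a depth map in millimetres."""
    z = depth_mm[v, u] / 10.0                 # mm -> cm
    x = (u - CX) * z / FX                     # back-project through the pinhole model
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```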
88

Contribution à la perception et l’attention visuelle artificielle bio-inspirée pour acquisition et conceptualisation de la connaissance en robotique autonome / Contribution to Perception and Artificial Bio-inspired Visual Attention for Acquisition and Conceptualization of Knowledge in Autonomous Robotics

Kachurka, Viachaslau 20 December 2017 (has links)
Dealing with the field of "Bio-inspired Perception", the present thesis focuses more particularly on Artificial Visual Attention and Visual Saliency. A concept of Artificial Visual Attention inspired by human mechanisms, leading to a model of such artificial bio-inspired attention, was developed, implemented and tested in the context of autonomous robotics. Although there are several dozen models of visual saliency, in terms of both contrast and cognition, there is no hybrid model integrating both mechanisms of attention: the visual aspect and the cognitive aspect. To build such a model, we explored existing approaches in the field of visual attention, as well as several approaches and paradigms from related fields (such as object recognition, machine learning, and classification). A functional architecture of a hybrid visual attention system, combining principles and mechanisms derived from human visual attention with computational and algorithmic methods, was implemented, explained and detailed. Another major contribution of this doctoral work is the theoretical modeling, development and practical application of the aforementioned bio-inspired Visual Attention model, which can provide a basis for the autonomy of assistive robotic systems. The studies carried out and the experimental validation of the proposed models confirmed the relevance of the proposed approach in increasing the autonomy of robotic systems within a real environment.
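The hybrid idea, fusing a bottom-up (stimulus-driven) map with a top-down (knowledge-driven) map, can be pictured with the minimal sketch below; the simple linear weighting is an assumption for illustration, not the fusion mechanism developed in the thesis.

```python
import numpy as np

def hybrid_attention(bottom_up, top_down, alpha=0.5):
    """Fuse normalized bottom-up and top-down maps into a single attention map."""
    bu = (bottom_up - bottom_up.min()) / (np.ptp(bottom_up) + 1e-9)   # contrast-driven map
    td = (top_down - top_down.min()) / (np.ptp(top_down) + 1e-9)      # knowledge-driven map
    return alpha * bu + (1.0 - alpha) * td
```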
89

Saliency processing in the human brain

Bogler, Carsten 01 September 2014 (has links)
Attention to visual stimuli can be guided by top-down search strategies or by bottom-up information. The property of a specific position to stand out in a visual scene is referred to as saliency. On the neural level, a representation of a saliency map is assumed to exist. However, to date it is still unclear where such a representation is located in the brain. This dissertation describes three experiments that investigated different aspects of bottom-up saliency processing in the human brain using functional magnetic resonance imaging (fMRI). Neural responses to differently salient stimuli presented in the periphery were investigated while top-down attention was directed to the central fixation point. The first two experiments investigated the neural responses to orientation contrast and to luminance contrast. The results indicate that saliency is potentially encoded in a distributed fashion in the visual system and that a feature-independent saliency map is calculated late in the processing hierarchy. The third experiment used natural scenes as stimuli. Consistent with the results of the other two experiments, graded saliency was identified in striate and extrastriate visual cortex, in particular in posterior intraparietal sulcus (pIPS), potentially reflecting a representation of feature-independent saliency. Additionally, information about the most salient positions could be decoded in more anterior brain regions, namely the anterior intraparietal sulcus (aIPS) and the frontal eye fields (FEF). Taken together, the results suggest distributed saliency processing of different low-level features in striate and extrastriate cortex that is potentially integrated into a feature-independent saliency representation in pIPS. Shifts of attention to the most salient positions are then prepared in aIPS and FEF. As participants were engaged in a fixation task, saliency is presumably processed in an automatic manner.
90

Reconnaissance perceptuelle des objets d’Intérêt : application à l’interprétation des activités instrumentales de la vie quotidienne pour les études de démence / Perceptual object of interest recognition : application to the interpretation of instrumental activities of daily living for dementia studies

Buso, Vincent 30 November 2015 (has links)
The rationale and motivation of this PhD thesis is in the diagnosis, assessment, maintenance and promotion of self-independence of people with dementia in their Instrumental Activities of Daily Living (IADLs). In this context, a strong focus is held towards the task of automatically recognizing IADLs. Egocentric video analysis (cameras worn by a person) has recently gained much interest regarding this goal. Indeed, recent studies have demonstrated how crucial the recognition of active objects (manipulated or observed by the person wearing the camera) is for the activity recognition task, and egocentric videos present the advantage of holding a strong differentiation between active and passive objects (associated with the background). One recent approach towards finding active elements in a scene is the incorporation of visual saliency in object recognition paradigms. Modeling the selective process of human perception of visual scenes represents an efficient way to drive the scene analysis towards particular areas considered of interest or salient, which, in egocentric videos, strongly correspond to the locus of objects of interest. The objective of this thesis is to design an object recognition system that relies on visual saliency maps to provide more precise object representations that are robust against background clutter and, therefore, improve the recognition of active objects for the IADL recognition task. This PhD thesis is conducted in the framework of the Dem@care European project. Regarding the vast field of visual saliency modeling, we investigate and propose contributions in both the bottom-up (gaze driven by stimuli) and top-down (gaze driven by semantics) areas that aim at enhancing the particular task of active object recognition in egocentric video content. Our first contribution on bottom-up models originates from the fact that observers are attracted by a central stimulus (the center of an image). This biological phenomenon is known as central bias. In egocentric videos, however, this hypothesis does not always hold. We study saliency models with non-central-bias geometrical cues. The proposed visual saliency models are trained based on eye fixations of observers and incorporated into spatio-temporal saliency models. When compared to state-of-the-art visual saliency models, the ones we present show promising results, as they highlight the necessity of a non-centered geometric saliency cue. For our top-down model contribution, we present a probabilistic visual attention model for manipulated object recognition in egocentric video content. Although arms often occlude objects and are usually considered a burden for many vision systems, they become an asset in our approach, as we extract both global and local features describing their geometric layout and pose, as well as the objects being manipulated. We integrate this information in a probabilistic generative model, provide update equations that automatically compute the model parameters optimizing the likelihood of the data, and design a method to generate maps of visual attention that are later used in an object-recognition framework. This task-driven assessment reveals that the proposed method outperforms the state-of-the-art in object recognition for egocentric video content. [...]
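The non-central geometric cue discussed above can be pictured as a 2D Gaussian prior whose mean and covariance would be fitted to recorded eye fixations rather than fixed at the image centre. The sketch below is a hedged illustration with placeholder parameters, not the spatio-temporal model trained in the thesis.

```python
import numpy as np

def geometric_prior(shape, mean, cov):
    """2D Gaussian prior over pixel coordinates (row, col), normalized to peak at 1."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1) - np.asarray(mean, dtype=float)
    d2 = np.einsum("ni,ij,nj->n", pts, np.linalg.inv(cov), pts)   # squared Mahalanobis distance
    prior = np.exp(-0.5 * d2).reshape(H, W)
    return prior / prior.max()

def biased_saliency(sal_map, mean, cov):
    """Modulate a bottom-up saliency map by a (possibly non-central) geometric prior."""
    return sal_map * geometric_prior(sal_map.shape, mean, cov)

# Example with placeholder values: a prior centred below and left of the image centre,
# as might be fitted to fixations in egocentric video (illustrative numbers only).
# out = biased_saliency(sal_map, mean=(300, 250), cov=np.diag([4000.0, 6000.0]))
```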
