  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Técnicas de visão computacional aplicadas ao reconhecimento de cenas naturais e locomoção autônoma em robôs agrícolas móveis / Computer vision techniques applied to natural scenes recognition and autonomous locomotion of agricultural mobile robots

Luciano Cássio Lulio, 09 August 2011
The use of computer systems in Precision Agriculture (PA) promotes the automation of processes and tasks in this area, specifically the inspection and analysis of agricultural crops and the guided/autonomous locomotion of mobile robots. In this context, the present work proposes the application of computer vision techniques to these tasks, developed in two distinct approaches, for an agricultural mobile robot platform under development at NEPAS/EESC/USP. For the robot locomotion problem (first approach), an architecture for image acquisition, processing and analysis was built to segment, classify and recognize navigation patterns of planting rows, used as guidance references for the mobile robot among orange, maize and sugarcane plantations. In the second approach, the same image processing techniques are applied to the inspection and localization of orange (primary) and maize (secondary) crops, analyzing their natural features, location and quantification. For both approaches, the adopted image processing strategy comprises: spatial-domain filtering of the acquired images; pre-processing in the RGB and HSV color spaces; unsupervised JSEG segmentation, customized for color quantization of non-homogeneous regions in these color spaces; normalization and feature extraction from the histograms of the pre-processed images for the training and test sets via principal component analysis (PCA); and pattern recognition with cognitive and statistical classification.
The developed methodology used databases of between 700 and 900 natural scene images per approach, acquired under distinct conditions. The segmentation algorithm produced significant results in both approaches, but less so for locating grasses: maize requires segmentation techniques beyond quantization of non-homogeneous regions alone. Statistical classification (Bayes and Naive Bayes) outperformed the cognitive classifiers (ANN and Fuzzy) in both approaches and in the subsequent construction of class maps in the HSV color space. In this same color space, fruit quantification and localization gave better results than in RGB. Thus, the natural scenes in both approaches were properly processed according to the materials and methods employed for segmentation, classification and pattern recognition, providing intrinsic and distinct characteristics of the computer vision techniques proposed for each approach.
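The histogram-feature classification stage described in this abstract can be illustrated with a minimal sketch (not the thesis's implementation; the names `hsv_histogram` and `GaussianNaiveBayes` are assumptions): normalized per-channel HSV histograms serve as feature vectors, which a simple Gaussian Naive Bayes model classifies.

```python
import numpy as np

def hsv_histogram(hsv_image, bins=16):
    """Per-channel histogram of an 8-bit HSV image, concatenated and normalized."""
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(hsv_image[..., ch], bins=bins, range=(0, 256))
        feats.append(hist)
    feat = np.concatenate(feats).astype(float)
    return feat / feat.sum()

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes over feature vectors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class mean, variance (with a small floor) and prior
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c): Gaussian log-likelihood, assuming feature independence per class
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]
```

A real pipeline would insert the JSEG segmentation and PCA steps between histogram extraction and classification; this sketch only shows why Naive Bayes is a natural fit for normalized histogram features.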
12

Etude des processus attentionnels mis en jeu lors de l'exploration de scènes naturelles : enregistrement conjoint des mouvements oculaires et de l'activité EEG / The study of attentional processes involved during the exploration of natural scenes : joint registration of eye movements and EEG activity

Queste, Hélène, 27 February 2014
In everyday life, when we look at the world around us, we constantly move our eyes. Our gaze settles successively on different locations of the visual field in order to capture visual information. Our eyes thus stabilize on two to three different regions per second, during periods called fixations. Between two fixations, we make rapid eye movements to shift our gaze to another region; these are called saccades. Eye movements are closely linked to attention. What attentional processes are involved during scene exploration? How do factors related to the scene, or to the instructions given for the exploration, modify eye movement parameters? How do these changes evolve over the course of the exploration? In this thesis, we jointly analyze eye-tracking and electroencephalographic (EEG) data to better understand the attentional processes involved in processing the visual information acquired during scene exploration. We study both low-level factors, i.e. the visual information contained in the scene, and high-level factors, i.e. the instructions given to observers.
In a first study, we considered high-level factors by manipulating the task to be performed during scene exploration. We chose four tasks: free exploration, categorization, visual search and spatial organization. These tasks were chosen because they involve visual information processing of different kinds and can be ranked by level of difficulty or attentional demand. In a second study, we focused on the visual search task and the influence of a time constraint. Finally, in a third study, we considered low-level factors through the influence of a visual distractor disturbing free exploration. For the first two studies, we jointly recorded the eye movements and EEG signals of a large number of observers. The joint analysis of EEG and eye-tracking signals takes advantage of both methods. Eye tracking gives access to eye movements and therefore to the deployment of visual attention over the scene: it tells us when, and which regions of the scene, are looked at. EEG, with its high temporal resolution, reveals differences in attentional processes depending on the experimental condition. We found differences between tasks in the potentials evoked by the scene onset and by fixations during exploration. Furthermore, we demonstrated a strong link between the global level of EEG activity observed over frontal regions and fixation durations, as well as task-resolution markers in the evoked potentials tied to fixations of interest. The joint analysis of EEG and eye-tracking data thus accounts for processing differences related to different attentional demands.
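The core of the joint analysis described above — fixation-related potentials — amounts to cutting the continuous EEG into epochs time-locked to fixation onsets, baseline-correcting each epoch, and averaging. A minimal sketch (function names, window lengths and the pre-fixation baseline are assumptions, not the thesis's code):

```python
import numpy as np

def fixation_locked_epochs(eeg, fixation_onsets, sfreq, tmin=-0.1, tmax=0.4):
    """Cut EEG (channels x samples) into epochs around fixation onsets (in seconds).

    Requires tmin < 0 so the pre-fixation interval can serve as baseline.
    """
    n0, n1 = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for t in fixation_onsets:
        s = int(round(t * sfreq))
        if s + n0 >= 0 and s + n1 <= eeg.shape[1]:  # skip epochs falling off the record
            seg = eeg[:, s + n0 : s + n1]
            # Baseline-correct with the mean of the pre-fixation interval
            seg = seg - seg[:, :-n0].mean(axis=1, keepdims=True)
            epochs.append(seg)
    return np.stack(epochs)  # (n_epochs, n_channels, n_samples)

def fixation_erp(eeg, fixation_onsets, sfreq, **kw):
    """Average the epochs to obtain the fixation-related potential."""
    return fixation_locked_epochs(eeg, fixation_onsets, sfreq, **kw).mean(axis=0)
```

In practice a toolbox such as MNE-Python handles overlap correction and artifact rejection, but the epoching logic is essentially this.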
13

Fusing integrated visual vocabularies-based bag of visual words and weighted colour moments on spatial pyramid layout for natural scene image classification

Alqasrawi, Yousef T. N., Neagu, Daniel, Cowling, Peter I., January 2013
The bag of visual words (BOW) model is an efficient image representation technique for image categorization and annotation tasks. Building good visual vocabularies, from automatically extracted image feature vectors, produces discriminative visual words, which can improve the accuracy of image categorization tasks. Most approaches that use the BOW model in categorizing images ignore useful information that can be obtained from image classes to build visual vocabularies. Moreover, most BOW models use intensity features extracted from local regions and disregard colour information, which is an important characteristic of any natural scene image. In this paper, we show that integrating visual vocabularies generated from each image category improves the BOW image representation and improves accuracy in natural scene image classification. We use a keypoint density-based weighting method to combine the BOW representation with image colour information on a spatial pyramid layout. In addition, we show that visual vocabularies generated from training images of one scene image dataset can plausibly represent another scene image dataset on the same domain. This helps in reducing time and effort needed to build new visual vocabularies. The proposed approach is evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories, respectively, using 10-fold cross-validation. The experimental results, using support vector machines with histogram intersection kernel, show that the proposed approach outperforms baseline methods such as Gist features, rgbSIFT features and different configurations of the BOW model.
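Two ingredients named in this abstract can be sketched compactly (a generic illustration under assumed function names, not the paper's code): quantizing local descriptors against a visual vocabulary to form a BOW histogram, and the histogram intersection kernel used by the SVM.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word; return normalized counts.

    descriptors: (n_desc, d) array; vocabulary: (n_words, d) array of cluster centers.
    """
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                     # nearest-word index per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def histogram_intersection_kernel(A, B):
    """Gram matrix K[i, j] = sum_k min(A[i, k], B[j, k]) over histogram rows."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)
```

A precomputed Gram matrix from `histogram_intersection_kernel` can be passed to an SVM implementation that accepts custom kernels (e.g. scikit-learn's `SVC(kernel="precomputed")`); the paper's category-specific vocabularies and spatial pyramid weighting would be layered on top of this basic representation.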
14

Compressive Sensing: Single Pixel SWIR Imaging of Natural Scenes

Brorsson, Andreas, January 2018
Photos captured in the shortwave infrared (SWIR) spectrum are interesting in military applications because they are independent of the time of day at which the picture is captured: the sun, moon, stars and night glow constantly illuminate the earth with shortwave infrared radiation. A major problem with today's SWIR cameras is that they are very expensive to produce and hence not broadly available, either within the military or to civilians. A relatively new technology called compressive sensing (CS) enables a new type of camera with only a single-pixel sensor (an SPC). This new type of camera needs only a fraction of the measurements relative to the number of pixels to be reconstructed, and reduces the cost of a shortwave infrared camera by a factor of 20. The camera uses a digital micromirror device (DMD) to select which mirrors (pixels) of the scene are measured, thus creating an underdetermined linear equation system that can be solved using the techniques described in CS to reconstruct the image. Given this new technology, it is in the interest of the Swedish Defence Research Agency (FOI) to evaluate the potential of a single-pixel camera. With an SPC architecture developed by FOI, the goal of this thesis was to develop methods for sampling and reconstructing images and for evaluating their quality. This thesis shows that structured random matrices and fast transforms have to be used to enable high-resolution images and to speed up image reconstruction significantly. The images could be evaluated with standard measurements associated with camera evaluation, and the evaluation showed that the camera can reproduce high-resolution images with relatively high image quality in daylight.
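The underdetermined linear system mentioned above can be made concrete with a toy sketch. Two assumptions not taken from the thesis: Sylvester-construction Hadamard rows stand in for the structured random measurement matrix, and Orthogonal Matching Pursuit (OMP) stands in for the recovery algorithm — an SPC may use other choices of both.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse x with y ~= A @ x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Measure a sparse 64-pixel "image" with 32 random Hadamard rows: half as many
# measurements as pixels, giving an underdetermined system y = A @ x.
rng = np.random.default_rng(1)
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 0.5]
rows = rng.choice(n, size=m, replace=False)
A = hadamard(n)[rows] / np.sqrt(n)
y = A @ x_true
# x_rec should approximate x_true when the selected rows are sufficiently incoherent
x_rec = omp(A, y, sparsity=3)
```

The fast-transform point in the abstract corresponds to replacing the explicit matrix product with an O(n log n) fast Walsh-Hadamard transform, which is what makes high-resolution reconstruction tractable.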
15

Eine Symmetrie der visuellen Welt in der Architektur des visuellen Kortex. / A Symmetry of the Visual World in the Architecture of the Visual Cortex.

Schnabel, Michael, 18 December 2008
No description available.
