1

Classification non supervisée avec pondération d'attributs par des méthodes évolutionnaires / Unsupervised classification with attribute weighting by evolutionary methods

Blansché, Alexandre. Korczak, Jerzy. Weber, Christiane. January 2007 (has links) (PDF)
Doctoral thesis: Computer Science: Strasbourg 1: 2006. / Title taken from the title screen. Bibliography: 10 p.
2

Classification images for contrast discrimination

McIlhagga, William H. 03 March 2021 (has links)
Yes / Contrast discrimination measures the smallest difference in contrast (the threshold) needed to successfully tell two stimuli apart. The contrast discrimination threshold typically increases with contrast. However, for low spatial frequency gratings the contrast threshold first increases, but then starts to decrease at contrasts above about 50%. This behaviour was originally observed in contrast discrimination experiments using dark spots as stimuli, suggesting that the contrast discrimination threshold for low spatial frequency gratings may be dominated by responses to the dark parts of the sinusoid. This study measures classification images for contrast discrimination experiments using a 1 cycle per degree sinusoidal grating at contrasts of 0, 25%, 50% and 75%. The classification images obtained clearly show that observers emphasize the darker parts of the sinusoidal grating (i.e. the troughs), and this emphasis increases with contrast. At 75% contrast, observers almost completely ignored the bright parts (peaks) of the sinusoid, and for some observers the emphasis on the troughs is already evident at contrasts as low as 25%. Analysis using a Hammerstein model suggests that the bias towards the dark parts of the stimulus is due to an early nonlinearity, perhaps similar to that proposed by Whittle.
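The Hammerstein account mentioned at the end of this abstract (a static nonlinearity followed by a linear filter) can be illustrated with a minimal sketch in Python; the asymmetric dark-weighting nonlinearity and all parameter values below are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def early_nonlinearity(s, dark_gain=2.0):
    """Illustrative asymmetric static nonlinearity: dark (negative-contrast)
    parts of the stimulus are amplified relative to bright parts.
    The dark_gain value is an assumption for illustration only."""
    return np.where(s < 0, dark_gain * s, s)

def hammerstein_decision(stimulus, template, dark_gain=2.0):
    """Hammerstein observer: static nonlinearity followed by a linear template."""
    return float(np.dot(template, early_nonlinearity(stimulus, dark_gain)))

# Toy example: a 1 cycle/deg sinusoidal grating at 50% and 75% contrast.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
template = np.sin(2 * np.pi * x) / 256          # a matched linear template
for contrast in (0.50, 0.75):
    grating = contrast * np.sin(2 * np.pi * x)
    print(contrast, hammerstein_decision(grating, template))
```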
3

Evidence for chromatic edge detectors in human vision using classification images

McIlhagga, William H., Mullen, K.T. 07 September 2018 (has links)
Yes / Edge detection plays an important role in human vision, and although it is clear that there are luminance edge detectors, it is not known whether there are chromatic edge detectors as well. We showed observers a horizontal edge blurred by a Gaussian filter (with widths of σ = 0.1125, 0.225, or 0.45°) embedded in blurred Brown noise. Observers had to choose which of two stimuli contained the edge. Brown noise was used in preference to white noise to reveal localized edge detectors. Edges and noise were defined by either luminance or chromatic contrast (isoluminant L/M and S-cone opponent). Classification image analysis was applied to observer responses. In this analysis, the random components of the stimulus are correlated with observer responses to reveal a template that shows how observers weighted different parts of the stimulus to arrive at their decision. We found classification images for both luminance and isoluminant chromatic stimuli that had shapes very similar to derivatives of Gaussian filters. The widths of these classification images tracked the widths of the edges, but the chromatic edge classification images were wider than the luminance ones. These results are consistent with edge detection filters sensitive to luminance contrast and isoluminant chromatic contrast. / Royal Society Travel Grant IE130877 and in part by Canadian Institutes of Health Research (CIHR) grant MOP-10819
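The classification-image computation described in this abstract, correlating the random noise components with observer responses to recover a template, can be sketched as below. This is the generic sum of response-conditional noise averages under assumed boolean trial labels, not the authors' exact analysis pipeline.

```python
import numpy as np

def classification_image(noise_fields, signal_present, responses):
    """Generic classification-image estimate: average the noise fields within
    each (stimulus, response) cell and combine them so that noise features
    which pushed the observer towards "signal" responses get positive weight.

    noise_fields   : (n_trials, n_pixels) array of per-trial noise
    signal_present : boolean array, True where the target was shown
    responses      : boolean array, True where the observer reported it
    """
    noise_fields = np.asarray(noise_fields, dtype=float)
    s = np.asarray(signal_present, dtype=bool)
    r = np.asarray(responses, dtype=bool)

    def cell_mean(mask):
        return noise_fields[mask].mean(axis=0) if mask.any() else np.zeros(noise_fields.shape[1])

    return (cell_mean(s & r) + cell_mean(~s & r)
            - cell_mean(s & ~r) - cell_mean(~s & ~r))
```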
4

Estimates of edge detection filters in human vision

McIlhagga, William H. 10 October 2018 (has links)
Yes / Edge detection is widely believed to be an important early stage in human visual processing. However, there have been relatively few attempts to map human edge detection filters. In this study, observers had to locate a randomly placed step edge in brown noise (the integral of white noise) with a 1/f² power spectrum. Their responses were modelled by assuming the probability the observer chose an edge location depended on the response of their own edge detection filter to that location. The observer's edge detection filter was then estimated by maximum likelihood methods. The filters obtained were odd-symmetric and similar to a derivative of Gaussian, with a peak-to-trough width of 0.1–0.15 degrees. These filters are compared with previous estimates of edge detectors in humans, and with neurophysiological receptive fields and theoretical edge detectors.
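The modelling step described here, with choice probability driven by the filter response at each candidate edge location and the filter estimated by maximum likelihood, can be sketched as follows. The softmax choice rule, filter length, and optimizer settings are assumptions for illustration, not the author's exact observer model.

```python
import numpy as np
from scipy.ndimage import correlate1d
from scipy.optimize import minimize
from scipy.special import logsumexp

def negative_log_likelihood(filt, noise_stimuli, chosen_locations):
    """Softmax-choice model: the probability of reporting the edge at a
    location grows with the filter response there (illustrative link)."""
    nll = 0.0
    for stim, loc in zip(noise_stimuli, chosen_locations):
        resp = correlate1d(stim, filt, mode="wrap")  # filter response at every location
        nll -= resp[loc] - logsumexp(resp)           # minus log softmax probability
    return nll

def estimate_filter(noise_stimuli, chosen_locations, filter_length=25):
    """Maximum-likelihood estimate of a linear edge-detection filter."""
    x0 = np.zeros(filter_length)
    fit = minimize(negative_log_likelihood, x0,
                   args=(noise_stimuli, chosen_locations), method="L-BFGS-B")
    return fit.x
```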
5

Identification des indices acoustiques utilisés lors de la compréhension de la parole dégradée / Identification of acoustic cues involved in degraded speech comprehension

Varnet, Léo 18 November 2015 (has links)
There is today a broad consensus in the scientific community regarding the involvement of acoustic cues in speech perception. Up to now, however, the precise mechanisms underlying the transformation of a continuous acoustic stream into discrete linguistic units remain largely undetermined. This is partly due to the lack of an effective method for identifying and characterizing the auditory primitives of speech. Since the earliest studies on the acoustic-phonetic interface by the Haskins Laboratories in the 1950s, a number of approaches have been proposed; they are nevertheless inherently limited by the non-naturalness of the stimuli used, the constraints of the experimental apparatus, and the a priori knowledge needed. The present thesis aimed at introducing a new method capitalizing on the speech-in-noise situation to reveal the acoustic cues used by listeners. As a first step, we adapted the Classification Image technique, developed in the visual domain, to a phoneme categorization task in noise. The technique relies on a Generalized Linear Model to link each participant's response to the specific configuration of noise on a trial-by-trial basis, thereby estimating the perceptual weighting of the different time-frequency regions in the decision. We illustrated the effectiveness of our Auditory Classification Image method through two examples: an /aba/-/ada/ categorization and a /da/-/ga/ categorization in the context /al/ or /aʁ/. Our analysis confirmed that the F2 and F3 onsets were crucial for the tasks, as suggested by previous studies, but also revealed unexpected cues. In a second step, we used this new method to compare the results of expert musicians (N=19) and dyslexic participants (N=18) with those of controls, which enabled us to explore the specificities of each group's listening strategies. Taken together, the results show that the Auditory Classification Image method may be a more precise and more straightforward approach for investigating the mechanisms at work at the acoustic-phonetic interface.
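A minimal sketch of the Generalized Linear Model step described above: a Bernoulli GLM (logistic regression) of the trial-by-trial responses on the flattened time-frequency representation of the noise, whose fitted coefficients form the classification image. The L2 penalty and library choice are assumptions, not the thesis's exact estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def auditory_classification_image(noise_spectrograms, responses, C=1.0):
    """Estimate perceptual weights over time-frequency bins with a
    Bernoulli GLM (logistic regression).

    noise_spectrograms : (n_trials, n_freq, n_time) noise added on each trial
    responses          : binary array of the listener's categorization
    C                  : inverse L2 penalty strength (illustrative value)
    """
    n_trials, n_freq, n_time = noise_spectrograms.shape
    X = noise_spectrograms.reshape(n_trials, n_freq * n_time)
    glm = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    glm.fit(X, responses)
    return glm.coef_.reshape(n_freq, n_time)  # weight map over time-frequency bins
```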
6

Optimal edge filters explain human blur detection

McIlhagga, William H., May, K.A. January 2012 (has links)
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N₁, and N₃⁺) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur.
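Two pieces of the analysis above lend themselves to a short sketch: the derivative-of-Gaussian shape that the smoothed classification image resembled, and the Akaike Information Criterion used to compare models (AIC = 2k − 2 ln L). The log-likelihood values in the example are placeholders, not the paper's results.

```python
import numpy as np

def derivative_of_gaussian(x, sigma):
    """First derivative of a Gaussian: the odd-symmetric profile that the
    smoothed classification image resembled."""
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -x / sigma**2 * g

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Placeholder comparison (hypothetical numbers, for illustration only):
# a compact model can win on AIC even if a richer one fits slightly better.
print(aic(log_likelihood=-520.3, n_params=4))    # e.g. a few-parameter edge detector model
print(aic(log_likelihood=-518.9, n_params=40))   # e.g. a many-parameter classification image
```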
